torch.Storage

A torch.Storage is a contiguous, one-dimensional array of a single data type.

Every torch.Tensor has a corresponding storage of the same data type.
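
For illustration, a tensor of dtype torch.float64 is backed by a torch.DoubleStorage. The snippet below is a minimal sketch, assuming a release in which Tensor.storage() returns the typed storage documented on this page:

>>> import torch
>>> t = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)
>>> s = t.storage()                # the tensor's underlying storage
>>> s.dtype
torch.float64
>>> s.size(), s.element_size()     # number of elements, bytes per element
(3, 8)
>>> s.nbytes()
24
>>> s.tolist()
[1.0, 2.0, 3.0]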

class torch.DoubleStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.float64
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
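
The same method set is repeated for each storage class that follows. As a brief sketch (assuming a standard CPU build), the example below exercises a few of the torch.DoubleStorage methods documented above:

>>> s = torch.DoubleStorage(3)       # uninitialized storage of 3 doubles
>>> _ = s.fill_(1.5)                 # fill_ returns the storage itself
>>> s.tolist()
[1.5, 1.5, 1.5]
>>> f = s.float()                    # cast: a new FloatStorage with the same values
>>> f.dtype
torch.float32
>>> c = s.clone()                    # independent copy of the storage
>>> _ = c.fill_(0.0)
>>> s.tolist()                       # the original is unchanged
[1.5, 1.5, 1.5]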
class torch.FloatStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.float32
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
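
As a further sketch of the sharing-related methods on a torch.FloatStorage: share_memory_() works on any CPU storage, while pin_memory() is only attempted here when CUDA is available, since page-locked memory requires a CUDA-capable build.

>>> s = torch.FloatStorage(4)
>>> _ = s.fill_(0.0)
>>> _ = s.share_memory_()            # no-op if already shared; returns self
>>> s.is_shared()
True
>>> if torch.cuda.is_available():    # pin_memory() copies to page-locked memory
...     pinned = torch.FloatStorage(4).pin_memory()
...     assert pinned.is_pinned()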
class torch.HalfStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.float16
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.LongStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.int64
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.IntStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.int32
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.ShortStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.int16
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.CharStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.int8
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.ByteStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.uint8
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.BoolStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.bool
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.BFloat16Storage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.bfloat16
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.ComplexDoubleStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.complex128
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.ComplexFloatStorage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.complex64
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.QUInt8Storage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.quint8
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.QInt8Storage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.qint8
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.QInt32Storage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.qint32
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
class torch.QUInt4x2Storage(*args, **kwargs)
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_(source, non_blocking=None)
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
data_ptr()
property device
double()

Casts this storage to double type

dtype = torch.quint4x2
element_size()
fill_(value)
float()

Casts this storage to float type

classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared, size)
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()

Casts this storage to long type

nbytes()
pickle_storage_type()
pin_memory()

Copies the storage to pinned memory, if it's not already pinned.

resize_(size)
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False)
