torch.Storage

torch.Storage is an alias for the storage class that corresponds to the default data type (torch.get_default_dtype()). For instance, if the default data type is torch.float, torch.Storage resolves to torch.FloatStorage.

The torch.<type>Storage and torch.cuda.<type>Storage classes, like torch.FloatStorage, torch.IntStorage, etc., are never actually instantiated; calling their constructors creates a torch.TypedStorage with the appropriate torch.dtype and torch.device. torch.<type>Storage classes have all of the same class methods that torch.TypedStorage has.

A torch.TypedStorage is a contiguous, one-dimensional array of elements of a particular torch.dtype. It can be given any torch.dtype, and the internal data will be interpreted appropriately. torch.TypedStorage contains a torch.UntypedStorage which holds the data as an untyped array of bytes.

Every strided torch.Tensor contains a torch.TypedStorage, which stores all of the data that the torch.Tensor views.

Warning

All storage classes except for torch.UntypedStorage will be removed in the future, and torch.UntypedStorage will be used in all cases.
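
To make the relationships above concrete, here is a minimal sketch (assuming a recent PyTorch release where Tensor.untyped_storage() is available):

    import torch

    # Calling a legacy constructor does not produce a FloatStorage instance; it
    # produces a torch.TypedStorage with dtype torch.float32 (and may emit a
    # deprecation warning, per the note above).
    s = torch.FloatStorage(4)
    print(type(s))              # torch.TypedStorage, not torch.FloatStorage
    print(s.dtype, s.device)    # torch.float32 cpu

    # Every strided tensor views a storage; untyped_storage() exposes that data
    # as raw bytes: 6 float32 elements occupy 24 bytes.
    t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
    print(t.untyped_storage().nbytes())   # 24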

class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
bfloat16()[source]

Casts this storage to bfloat16 type.

bool()[source]

Casts this storage to bool type.

byte()[source]

Casts this storage to byte type.

char()[source]

Casts this storage to char type.

clone()[source]

Return a copy of this storage.

complex_double()[source]

Casts this storage to complex double type.

complex_float()[source]

Casts this storage to complex float type.

copy_(source, non_blocking=None)[source]
cpu()[source]

Return a CPU copy of this storage if it’s not already on the CPU.

cuda(device=None, non_blocking=False, **kwargs)[source]

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

Return type

T
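
For example, a hedged sketch that assumes a CUDA-enabled build with at least one visible GPU; cuda() behaves the same way on torch.UntypedStorage, which is used here in line with the deprecation warning above:

    import torch

    if torch.cuda.is_available():
        cpu_storage = torch.tensor([1.0, 2.0, 3.0]).untyped_storage()
        gpu_storage = cpu_storage.cuda()   # copy onto the current CUDA device
        print(gpu_storage.is_cuda)         # True
        print(gpu_storage.device)          # e.g. cuda:0
        # A second cuda() call on a storage already on the right device
        # returns the original object instead of copying again.
        print(gpu_storage.cuda() is gpu_storage)   # True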

data_ptr()[source]
property device
double()[source]

Casts this storage to double type.

dtype: dtype
element_size()[source]
property filename: Optional[str]

Returns the file name associated with this storage if the storage was memory mapped from a file, or None if the storage was not created by memory mapping a file.

fill_(value)[source]
float()[source]

Casts this storage to float type.

float8_e4m3fn()[source]

Casts this storage to float8_e4m3fn type.

float8_e5m2()[source]

Casts this storage to float8_e5m2 type.

classmethod from_buffer(*args, **kwargs)[source]
classmethod from_file(filename, shared=False, size=0) Storage[source]

Creates a CPU storage backed by a memory-mapped file.

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True, the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory (whether MAP_SHARED or MAP_PRIVATE is passed to the underlying mmap(2) call)

  • size (int) – number of elements in the storage

get_device()[source]
Return type

int

half()[source]

Casts this storage to half type.

hpu(device=None, non_blocking=False, **kwargs)[source]

Returns a copy of this object in HPU memory.

If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination HPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

Return type

T

int()[source]

Casts this storage to int type.

property is_cuda
property is_hpu
is_pinned(device='cuda')[source]

Determine whether the CPU TypedStorage is already pinned on device.

Parameters

device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns

A boolean variable.

is_shared()[source]
is_sparse = False
long()[source]

Casts this storage to long type.

nbytes()[source]
pickle_storage_type()[source]
pin_memory(device='cuda')[source]

Copy the CPU TypedStorage to pinned memory, if it’s not already pinned.

Parameters

device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns

A pinned CPU storage.
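
For example, a hedged sketch that assumes a CUDA-enabled build; the same methods exist on torch.UntypedStorage, which is used here in line with the deprecation warning above:

    import torch

    if torch.cuda.is_available():
        s = torch.tensor([1.0, 2.0, 3.0]).untyped_storage()
        print(s.is_pinned())        # False: ordinary pageable CPU memory
        pinned = s.pin_memory()     # returns a pinned copy; s itself is unchanged
        print(pinned.is_pinned())   # True
        # A pinned source allows an asynchronous host-to-device copy.
        gpu = pinned.cuda(non_blocking=True)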

resize_(size)[source]
share_memory_()[source]

See torch.UntypedStorage.share_memory_()

short()[source]

Casts this storage to short type.

size()[source]
tolist()[source]

Return a list containing the elements of this storage.
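
For example, a small sketch: a TypedStorage lists its typed elements, while an UntypedStorage lists raw byte values (the exact byte order shown is an assumption for a little-endian machine):

    import torch

    t = torch.tensor([1, 2], dtype=torch.int16)

    # Tensor.storage() returns a TypedStorage (and may warn that TypedStorage
    # is deprecated); its tolist() yields the elements as int16 values.
    print(t.storage().tolist())          # [1, 2]

    # UntypedStorage.tolist() yields the underlying bytes instead; the byte
    # order depends on the machine's endianness.
    print(t.untyped_storage().tolist())  # e.g. [1, 0, 2, 0] on little-endian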

type(dtype=None, non_blocking=False)[source]

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

Return type

Union[T, str]
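
For example, a hedged sketch; the exact string returned by type() may vary with the storage class and device:

    import torch

    # Tensor.storage() returns a TypedStorage and may emit a deprecation warning.
    s = torch.tensor([1, 2, 3], dtype=torch.int32).storage()

    # With no argument, type() just reports the storage type as a string.
    print(s.type())       # e.g. 'torch.IntStorage'

    # The per-dtype cast methods above (float(), double(), ...) return a
    # converted copy rather than reinterpreting the raw bytes.
    f = s.float()
    print(f.dtype)        # torch.float32
    print(f.tolist())     # [1.0, 2.0, 3.0]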

untyped()[source]

Return the internal torch.UntypedStorage.

class torch.UntypedStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type.

bool()

Casts this storage to bool type.

byte()

Casts this storage to byte type.

byteswap(dtype)

Swap bytes in underlying data.

char()

Casts this storage to char type.

clone()

Return a copy of this storage.

complex_double()

Casts this storage to complex double type.

complex_float()

Casts this storage to complex float type.

copy_()
cpu()

Return a CPU copy of this storage if it’s not already on the CPU.

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device: device
double()

Casts this storage to double type.

element_size()
property filename: Optional[str]

Returns the file name associated with this storage if the storage was memory mapped from a file, or None if the storage was not created by memory mapping a file.

fill_()
float()

Casts this storage to float type.

float8_e4m3fn()

Casts this storage to float8_e4m3fn type.

float8_e5m2()

Casts this storage to float8_e5m2 type.

static from_buffer()
static from_file(filename, shared=False, size=0) Storage

Creates a CPU storage backed by a memory-mapped file.

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage; in the case of an UntypedStorage, the file must contain at least size bytes). If shared is True, the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory (whether MAP_SHARED or MAP_PRIVATE is passed to the underlying mmap(2) call)

  • size (int) – number of elements in the storage
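
For example, a minimal sketch; the temporary file path is illustrative only:

    import os
    import tempfile
    import torch

    path = os.path.join(tempfile.mkdtemp(), "storage.bin")

    # For an UntypedStorage, size counts bytes. With shared=True the file is
    # created (and grown to 16 bytes) if needed, and writes go through to it.
    s = torch.UntypedStorage.from_file(path, shared=True, size=16)
    print(s.nbytes(), os.path.getsize(path))   # 16 16

    # View the same 16 bytes as four float32 elements through a tensor and
    # write through the mapping.
    t = torch.empty(0, dtype=torch.float32).set_(s)
    t.fill_(1.0)
    print(t)                                   # tensor([1., 1., 1., 1.])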

get_device()
Return type

int

half()

Casts this storage to half type.

hpu(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in HPU memory.

If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination HPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

int()

Casts this storage to int type.

property is_cuda
property is_hpu
is_pinned(device='cuda')

Determine whether the CPU storage is already pinned on device.

Parameters

device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns

A boolean variable.

is_shared()
is_sparse: bool = False
is_sparse_csr: bool = False
long()

Casts this storage to long type.

mps()

Return an MPS copy of this storage if it’s not already on MPS.

nbytes()
new()
pin_memory(device='cuda')

Copy the CPU storage to pinned memory, if it’s not already pinned.

Parameters

device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns

A pinned CPU storage.

resize_()
share_memory_(*args, **kwargs)[source]

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Note that it is thread safe to call this function from multiple threads on the same object. It is NOT thread safe, however, to call any other function on self without proper synchronization. Please see Multiprocessing best practices for more details.

Note

When all references to a storage in shared memory are deleted, the associated shared memory object will also be deleted. PyTorch has a special cleanup process to ensure that this happens even if the current process exits unexpectedly.

It is worth noting the differences between share_memory_() and from_file() with shared=True:

  1. share_memory_ uses shm_open(3) to create a POSIX shared memory object, while from_file() uses open(2) to open the filename passed by the user.

  2. Both use an mmap(2) call with MAP_SHARED to map the file/object into the current virtual address space.

  3. share_memory_ will call shm_unlink(3) on the object after mapping it to make sure the shared memory object is freed when no process has the object open. torch.from_file(shared=True) does not unlink the file. This file is persistent and will remain until it is deleted by the user.

Returns

self
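
For example, a small sketch of the in-place move; is_shared() reflects the change and the method returns self:

    import torch

    s = torch.tensor([1.0, 2.0, 3.0]).untyped_storage()
    print(s.is_shared())        # False: ordinary private CPU allocation
    same = s.share_memory_()    # moves the data into shared memory in place
    print(same is s)            # True: the method returns self
    print(s.is_shared())        # True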

short()

Casts this storage to short type.

size()
Return type

int

tolist()

Return a list containing the elements of this storage.

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

untyped()
class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.float64[source]
class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.float32[source]
class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.float16[source]
class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int64[source]
class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int32[source]
class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int16[source]
class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int8[source]
class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.uint8[source]
class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.bool[source]
class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.bfloat16[source]
class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.complex128[source]
class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.complex64[source]
class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.quint8[source]
class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.qint8[source]
class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.qint32[source]
class torch.QUInt4x2Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.quint4x2[source]
class torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.quint2x4[source]
