torch.Storage¶
In PyTorch, a regular tensor is a multi-dimensional array that is defined by the following components:
Storage: The actual data of the tensor, stored as a contiguous, one-dimensional array of bytes.
dtype: The data type of the elements in the tensor, such as torch.float32 or torch.int64.
Shape: A tuple indicating the size of the tensor in each dimension.
Stride: The step size needed to move from one element to the next in each dimension.
Offset: The starting point in the storage from which the tensor data begins. This will usually be 0 for newly created tensors.
These components together define the structure and data of a tensor, with the storage holding the actual data and the rest serving as metadata.
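For instance, these components can be read directly off a small tensor. The following is a minimal sketch using standard tensor methods (outputs shown for a recent PyTorch release):
>>> t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
>>> t.dtype, t.shape, t.stride(), t.storage_offset()
(torch.float32, torch.Size([2, 3]), (3, 1), 0)
>>> t.untyped_storage().nbytes()   # 6 elements * 4 bytes per float32
24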
Untyped Storage API¶
A torch.UntypedStorage
is a contiguous, one-dimensional array of elements. Its length is equal to the number of
bytes of the tensor. The storage serves as the underlying data container for tensors.
In general, a tensor created in PyTorch using regular constructors such as zeros()
, zeros_like()
or new_zeros()
will produce tensors where there is a one-to-one correspondence between the tensor
storage and the tensor itself.
However, a storage is allowed to be shared by multiple tensors.
For instance, any view of a tensor (obtained through view()
or some, but not all, kinds of indexing
like integers and slices) will point to the same underlying storage as the original tensor.
When serializing and deserializing tensors that share a common storage, the relationship is preserved, and the tensors
continue to point to the same storage. Interestingly, deserializing multiple tensors that point to a single storage
can be faster than deserializing multiple independent tensors.
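For instance (a short sketch), a slice of a tensor writes into the very same storage as the original:
>>> base = torch.zeros(4)
>>> view = base[1:3]                  # slicing produces a view, not a copy
>>> view.untyped_storage().data_ptr() == base.untyped_storage().data_ptr()
True
>>> view.fill_(1.)                    # writes through the view are visible in base
tensor([1., 1.])
>>> base
tensor([0., 1., 1., 0.])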
A tensor storage can be accessed through the untyped_storage()
method. This will return an object of
type torch.UntypedStorage
.
Fortunately, storages have a unique identifier accessed through the torch.UntypedStorage.data_ptr()
method.
In regular settings, two tensors with the same data storage will have the same storage data_ptr
.
However, a tensor can point to two separate storages, one for its data attribute and another for its grad
attribute. Each will require a data_ptr()
of its own. In general, there is no guarantee that a
torch.Tensor.data_ptr()
and torch.UntypedStorage.data_ptr()
match and this should not be assumed to be true.
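For instance, a view that starts partway into a storage reports a tensor-level data_ptr that is shifted relative to the storage's data_ptr. A small sketch (assuming float32, so 4 bytes per element):
>>> base = torch.zeros(4)
>>> view = base[2:]                   # view starting at an offset of 2 elements
>>> view.untyped_storage().data_ptr() == base.untyped_storage().data_ptr()
True
>>> view.data_ptr() == view.untyped_storage().data_ptr()
False
>>> view.data_ptr() - view.untyped_storage().data_ptr()   # 2 float32 elements = 8 bytes
8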
Untyped storages are somewhat independent of the tensors that are built on them. Practically, this means that tensors with different dtypes or shapes can point to the same storage. It also implies that a tensor storage can be changed, as the following example shows:
>>> t = torch.ones(3)
>>> s0 = t.untyped_storage()
>>> s0
0
0
128
63
0
0
128
63
0
0
128
63
[torch.storage.UntypedStorage(device=cpu) of size 12]
>>> s1 = s0.clone()
>>> s1.fill_(0)
0
0
0
0
0
0
0
0
0
0
0
0
[torch.storage.UntypedStorage(device=cpu) of size 12]
>>> # Fill the tensor with a zeroed storage
>>> t.set_(s1, storage_offset=t.storage_offset(), stride=t.stride(), size=t.size())
tensor([0., 0., 0.])
Warning
Please note that directly modifying a tensor’s storage as shown in this example is not a recommended practice.
This low-level manipulation is illustrated solely for educational purposes, to demonstrate the relationship between
tensors and their underlying storages. In general, it’s more efficient and safer to use standard torch.Tensor
methods, such as clone()
and fill_()
, to achieve the same results.
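For reference, a sketch of the tensor-level route recommended above, using fill_() in-place and clone() out-of-place (zero_() is used here as a convenient stand-in for fill_(0)):
>>> t = torch.ones(3)
>>> t.fill_(0.)                       # tensor-level, in-place
tensor([0., 0., 0.])
>>> t2 = torch.ones(3)
>>> zeroed = t2.clone().zero_()       # tensor-level, out-of-place copy then zero
>>> zeroed
tensor([0., 0., 0.])
>>> t2                                # the original is untouched
tensor([1., 1., 1.])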
Other than data_ptr, untyped storages also have other attributes such as filename
(in case the storage points to a file on disk), device, or is_cuda for device checks.
A storage can also be manipulated in-place or out-of-place with methods like copy_,
fill_ or pin_memory. For more information, check the API
reference below. Keep in mind that modifying storages is a low-level API and comes with risks!
Most of these APIs also exist on the tensor level: if present, they should be prioritized over their storage
counterparts.
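A brief sketch of a few of these storage-level attributes on a plain CPU tensor (so there is no backing file and no CUDA device):
>>> s = torch.ones(3).untyped_storage()
>>> s.device, s.is_cuda, s.filename   # a plain CPU storage: no backing file
(device(type='cpu'), False, None)
>>> _ = s.fill_(0)                    # in-place: every byte of the storage becomes 0
>>> list(s)
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]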
Special cases¶
We mentioned that a tensor with a non-None grad attribute actually holds two pieces of data.
In this case, untyped_storage() will return the storage of the data attribute,
whereas the storage of the gradient can be obtained through tensor.grad.untyped_storage().
>>> t = torch.zeros(3, requires_grad=True)
>>> t.sum().backward()
>>> assert list(t.untyped_storage()) == [0] * 12 # the storage of the tensor is just 0s
>>> assert list(t.grad.untyped_storage()) != [0] * 12 # the storage of the gradient isn't
There are also special cases where tensors do not have a typical storage, or no storage at all:
Tensors on the "meta" device: tensors on the "meta" device are used for shape inference and do not hold actual data (a short sketch follows after this list).
Fake Tensors: another internal tool used by PyTorch's compiler is FakeTensor, which is based on a similar idea.
Tensor subclasses or tensor-like objects can also display unusual behaviours. In general, we do not expect many use cases to require operating at the Storage level!
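As a small sketch of the "meta" case mentioned above (only metadata is shown here, since the exact behaviour of storage access on meta tensors can vary across versions):
>>> t = torch.ones(3, device="meta")
>>> t.shape, t.dtype                  # metadata is available for shape inference
(torch.Size([3]), torch.float32)
>>> (t + 1).shape                     # ops propagate shapes without touching real data
torch.Size([3])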
- class torch.UntypedStorage(*args, **kwargs)[source][source]¶
- copy_()¶
- cuda(device=None, non_blocking=False)[source]¶
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
- Return type
Union[_StorageBase, TypedStorage]
- data_ptr()¶
- element_size()¶
- property filename: Optional[str]¶
Returns the file name associated with this storage.
The file name will be a string if the storage is on CPU and was created via
from_file() with shared as True. This attribute is None otherwise.
- fill_()¶
- static from_buffer()¶
- static from_file(filename, shared=False, size=0) → Storage¶
Creates a CPU storage backed by a memory-mapped file.
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage; in the case of an UntypedStorage the file must contain at least size bytes). If shared is True the file will be created if needed.
- Parameters
filename (str) – file name to map
shared (bool) – whether to share memory (whether MAP_SHARED or MAP_PRIVATE is passed to the underlying mmap(2) call)
size (int) – number of elements in the storage
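A minimal sketch of mapping a file into an UntypedStorage ("data.bin" is a hypothetical file of at least 12 bytes):
>>> s = torch.UntypedStorage.from_file("data.bin", shared=False, size=12)
>>> s.nbytes()
12
>>> s.filename is None                # filename is only reported for shared=True mappings
True
>>> torch.UntypedStorage.from_file("data.bin", shared=True, size=12).filename
'data.bin'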
- hpu(device=None, non_blocking=False)[source]¶
Returns a copy of this object in HPU memory.
If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
- Return type
Union[_StorageBase, TypedStorage]
- property is_cuda¶
- property is_hpu¶
- is_pinned(device='cuda')[source]¶
Determine whether the CPU storage is already pinned on device.
- Parameters
device (str or torch.device) – The device to pin memory on. Default:
'cuda'
.- Returns
A boolean variable.
- nbytes()¶
- new()¶
- pin_memory(device='cuda')[source]¶
Copy the CPU storage to pinned memory, if it’s not already pinned.
- Parameters
device (str or torch.device) – The device to pin memory on. Default:
'cuda'
.- Returns
A pinned CPU storage.
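A short sketch of pinning a CPU storage (this assumes a CUDA-enabled build with at least one visible GPU):
>>> # assumes a CUDA-enabled build with at least one visible GPU
>>> s = torch.ones(3).untyped_storage()
>>> pinned = s.pin_memory()           # returns a new, page-locked CPU storage
>>> pinned.is_pinned()
True
>>> s.is_pinned()                     # the original storage is left unpinned
False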
- resizable()¶
- resize_()¶
- share_memory_() → self¶
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
Note that it is thread safe to call this function from multiple threads on the same object. It is NOT thread safe, though, to call any other function on self without proper synchronization. Please see Multiprocessing best practices for more details.
Note
When all references to a storage in shared memory are deleted, the associated shared memory object will also be deleted. PyTorch has a special cleanup process to ensure that this happens even if the current process exits unexpectedly.
It is worth noting the difference between share_memory_() and from_file() with shared = True:
share_memory_ uses shm_open(3) to create a POSIX shared memory object, while from_file() uses open(2) to open the filename passed by the user.
Both use an mmap(2) call with MAP_SHARED to map the file/object into the current virtual address space.
share_memory_ will call shm_unlink(3) on the object after mapping it, to make sure the shared memory object is freed when no process has the object open. torch.from_file(shared=True) does not unlink the file. This file is persistent and will remain until it is deleted by the user.
- Returns
self
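A small sketch of the shared-memory round trip on a CPU storage:
>>> s = torch.ones(3).untyped_storage()
>>> s.is_shared()
False
>>> _ = s.share_memory_()             # moves the storage to shared memory, returns self
>>> s.is_shared()
True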
- type(dtype=None, non_blocking=False)[source]¶
- Return type
Union[_StorageBase, TypedStorage]
Legacy Typed Storage¶
Warning
For historical context, PyTorch previously used typed storage classes, which are
now deprecated and should be avoided. The following details this API in case you
should encounter it, although its usage is highly discouraged.
All storage classes except for torch.UntypedStorage
will be removed
in the future, and torch.UntypedStorage
will be used in all cases.
torch.Storage
is an alias for the storage class that corresponds with
the default data type (torch.get_default_dtype()
). For example, if the
default data type is torch.float
, torch.Storage
resolves to
torch.FloatStorage
.
The torch.<type>Storage
and torch.cuda.<type>Storage
classes,
like torch.FloatStorage
, torch.IntStorage
, etc., are not
actually ever instantiated. Calling their constructors creates
a torch.TypedStorage
with the appropriate torch.dtype
and
torch.device
. torch.<type>Storage
classes have all of the
same class methods that torch.TypedStorage
has.
A torch.TypedStorage
is a contiguous, one-dimensional array of
elements of a particular torch.dtype
. It can be given any
torch.dtype
, and the internal data will be interpreted appropriately.
torch.TypedStorage
contains a torch.UntypedStorage
which
holds the data as an untyped array of bytes.
Every strided torch.Tensor
contains a torch.TypedStorage
,
which stores all of the data that the torch.Tensor
views.
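For illustration only (this legacy path may emit deprecation warnings in recent releases), a TypedStorage obtained from a tensor wraps an UntypedStorage:
>>> t = torch.ones(3)
>>> ts = t.storage()                  # legacy TypedStorage (may emit a deprecation warning)
>>> ts.dtype
torch.float32
>>> ts.untyped().nbytes()             # the wrapped UntypedStorage holds the raw 12 bytes
12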
- class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- cuda(device=None, non_blocking=False)[source][source]¶
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
- property device¶
- property filename: Optional[str]¶
Returns the file name associated with this storage if the storage was memory mapped from a file, or
None if the storage was not created by memory mapping a file.
- classmethod from_file(filename, shared=False, size=0) → Storage [source][source]¶
Creates a CPU storage backed by a memory-mapped file.
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.
- Parameters
filename (str) – file name to map
shared (bool) – whether to share memory (whether MAP_SHARED or MAP_PRIVATE is passed to the underlying mmap(2) call)
size (int) – number of elements in the storage
- hpu(device=None, non_blocking=False)[source][source]¶
Returns a copy of this object in HPU memory.
If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.
- property is_cuda¶
- property is_hpu¶
- is_pinned(device='cuda')[source][source]¶
Determine whether the CPU TypedStorage is already pinned on device.
- Parameters
device (str or torch.device) – The device to pin memory on. Default:
'cuda'
- Returns
A boolean variable.
- pin_memory(device='cuda')[source][source]¶
Copy the CPU TypedStorage to pinned memory, if it’s not already pinned.
- Parameters
device (str or torch.device) – The device to pin memory on. Default:
'cuda'
.- Returns
A pinned CPU storage.
- to(*, device, non_blocking=False)[source][source]¶
Returns a copy of this object in device memory.
If this object is already on the correct device, then no copy is performed and the original object is returned.
- type(dtype=None, non_blocking=False)[source][source]¶
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
- Parameters
dtype (type or string) – The desired type
non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
- Return type
Union[_StorageBase, TypedStorage, str]
- untyped()[source][source]¶
Return the internal
torch.UntypedStorage
.
- class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.float64[source]¶
- class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.float32[source]¶
- class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.float16[source]¶
- class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.int64[source]¶
- class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.int32[source]¶
- class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.int16[source]¶
- class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.int8[source]¶
- class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.uint8[source]¶
- class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.bool[source]¶
- class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.bfloat16[source]¶
- class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.complex128[source]¶
- class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.complex64[source]¶
- class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.quint8[source]¶
- class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.qint8[source]¶
- class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source][source]¶
- dtype: torch.dtype = torch.qint32[source]¶