torch.Storage

A torch.Storage is a contiguous, one-dimensional array of a single data type.

Every torch.Tensor has a corresponding storage of the same data type.
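
For illustration, a minimal sketch of the Tensor/Storage relationship (assumes a typical PyTorch install; the exact storage class reported can vary between releases):

>>> import torch
>>> t = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)
>>> s = t.storage()          # a torch.DoubleStorage backing the tensor
>>> s.size()                 # number of elements in the storage
3
>>> s.element_size()         # bytes per float64 element
8
>>> s.tolist()
[1.0, 2.0, 3.0]
>>> s[0] = 42.0              # the tensor and the storage share memory
>>> t
tensor([42.,  2.,  3.], dtype=torch.float64)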

class torch.DoubleStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
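
For illustration, a hedged sketch of cuda() on a storage, guarded so it only runs when a GPU is present:

>>> import torch
>>> s = torch.FloatStorage([1.0, 1.0, 1.0, 1.0])
>>> if torch.cuda.is_available():
...     s_gpu = s.cuda()              # copy to the current CUDA device
...     assert s_gpu.is_cuda
...     assert s_gpu.cuda() is s_gpu  # already on the correct device: no copy, same object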

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage
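
For illustration, a hedged sketch of from_file with shared=True (the file path below is just a placeholder):

>>> import os, tempfile
>>> import torch
>>> path = os.path.join(tempfile.gettempdir(), "example_storage.bin")
>>> s = torch.DoubleStorage.from_file(path, shared=True, size=16)   # file is created if needed
>>> _ = s.fill_(0.0)
>>> s2 = torch.DoubleStorage.from_file(path, shared=True, size=16)  # second mapping of the same file
>>> s[0] = 3.14
>>> s2[0]                    # both storages view the same file-backed memory
3.14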

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self
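
For illustration, a hedged sketch of share_memory_() and pin_memory(); pinning needs CUDA, so that part is guarded:

>>> import torch
>>> s = torch.DoubleStorage(8)
>>> s.is_shared()
False
>>> _ = s.share_memory_()    # returns self; no-op if the storage is already shared
>>> s.is_shared()
True
>>> if torch.cuda.is_available():
...     assert s.pin_memory().is_pinned()   # pinned copy, useful for async host-to-device transfers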

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
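
For illustration, a hedged sketch of type() used both to query and to cast:

>>> import torch
>>> s = torch.FloatStorage([1.0, 2.0])
>>> s.type()                            # no dtype given: report the type as a string
'torch.FloatStorage'
>>> d = s.type(torch.DoubleStorage)     # cast by class ...
>>> d = s.type('torch.DoubleStorage')   # ... or by its string name
>>> isinstance(d, torch.DoubleStorage)
True
>>> s.type(torch.FloatStorage) is s     # already the right type: no copy is performed
True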

class torch.FloatStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.HalfStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file()
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.LongStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.IntStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.ShortStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.CharStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.ByteStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.BoolStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.BFloat16Storage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.ComplexDoubleStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.ComplexFloatStorage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file(filename, shared=False, size=0) → Storage

If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.

size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory

  • size (int) – number of elements in the storage

get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.QUInt8Storage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file()
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.QInt8Storage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file()
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.QInt32Storage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file()
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.

class torch.QUInt4x2Storage(*args, **kwargs)[source]
bfloat16()

Casts this storage to bfloat16 type

bool()

Casts this storage to bool type

byte()

Casts this storage to byte type

char()

Casts this storage to char type

clone()

Returns a copy of this storage

complex_double()

Casts this storage to complex double type

complex_float()

Casts this storage to complex float type

copy_()
cpu()

Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)

Returns a copy of this object in CUDA memory.

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
  • device (int) – The destination GPU id. Defaults to the current device.

  • non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

data_ptr()
device
double()

Casts this storage to double type

dtype
element_size()
fill_()
float()

Casts this storage to float type

static from_buffer()
static from_file()
get_device()
half()

Casts this storage to half type

int()

Casts this storage to int type

is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()

Casts this storage to long type

new()
pin_memory()

Copies the storage to pinned memory, if it’s not already pinned.

resize_()
share_memory_()

Moves the storage to shared memory.

This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.

Returns: self

short()

Casts this storage to short type

size()
tolist()

Returns a list containing the elements of this storage

type(dtype=None, non_blocking=False, **kwargs)

Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (type or string) – The desired type

  • non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

  • **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
