
torch.Tensor

A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.

Data types

Torch defines tensor types with the following data types:

Data type                                dtype
32-bit floating point                    torch.float32 or torch.float
64-bit floating point                    torch.float64 or torch.double
16-bit floating point [1]                torch.float16 or torch.half
16-bit floating point [2]                torch.bfloat16
32-bit complex                           torch.complex32 or torch.chalf
64-bit complex                           torch.complex64 or torch.cfloat
128-bit complex                          torch.complex128 or torch.cdouble
8-bit integer (unsigned)                 torch.uint8
16-bit integer (unsigned)                torch.uint16 (limited support) [4]
32-bit integer (unsigned)                torch.uint32 (limited support) [4]
64-bit integer (unsigned)                torch.uint64 (limited support) [4]
8-bit integer (signed)                   torch.int8
16-bit integer (signed)                  torch.int16 or torch.short
32-bit integer (signed)                  torch.int32 or torch.int
64-bit integer (signed)                  torch.int64 or torch.long
Boolean                                  torch.bool
quantized 8-bit integer (unsigned)       torch.quint8
quantized 8-bit integer (signed)         torch.qint8
quantized 32-bit integer (signed)        torch.qint32
quantized 4-bit integer (unsigned) [3]   torch.quint4x2
8-bit floating point, e4m3 [5]           torch.float8_e4m3fn (limited support)
8-bit floating point, e5m2 [5]           torch.float8_e5m2 (limited support)

[1] Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.

[2] Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.

[3] A quantized 4-bit integer is stored as an 8-bit signed integer. It is currently only supported in the EmbeddingBag operator.

[4] Unsigned types aside from uint8 currently have only limited support in eager mode (they primarily exist to assist usage with torch.compile); if you need eager support and do not need the extra range, we recommend using their signed variants instead. See https://github.com/pytorch/pytorch/issues/58734 for more details.

[5] torch.float8_e4m3fn and torch.float8_e5m2 implement the spec for 8-bit floating point types from https://arxiv.org/abs/2209.05433. Op support is very limited.
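The range/precision trade-off described in notes [1] and [2] can be inspected with torch.finfo; a small sketch:

```python
import torch

f16 = torch.finfo(torch.float16)    # 5 exponent bits: narrow range
bf16 = torch.finfo(torch.bfloat16)  # 8 exponent bits: float32-like range
f32 = torch.finfo(torch.float32)

print(f16.max)   # 65504.0 -- small dynamic range
print(bf16.max)  # close to float32's ~3.4e38
print(f32.max)
```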

For backwards compatibility, we support the following alternate class names for these data types:

Data type                   CPU tensor              GPU tensor
32-bit floating point       torch.FloatTensor       torch.cuda.FloatTensor
64-bit floating point       torch.DoubleTensor      torch.cuda.DoubleTensor
16-bit floating point       torch.HalfTensor        torch.cuda.HalfTensor
16-bit floating point       torch.BFloat16Tensor    torch.cuda.BFloat16Tensor
8-bit integer (unsigned)    torch.ByteTensor        torch.cuda.ByteTensor
8-bit integer (signed)      torch.CharTensor        torch.cuda.CharTensor
16-bit integer (signed)     torch.ShortTensor       torch.cuda.ShortTensor
32-bit integer (signed)     torch.IntTensor         torch.cuda.IntTensor
64-bit integer (signed)     torch.LongTensor        torch.cuda.LongTensor
Boolean                     torch.BoolTensor        torch.cuda.BoolTensor

However, to construct tensors, we recommend using factory functions such as torch.empty() with the dtype argument instead. The torch.Tensor constructor is an alias for the default tensor type (torch.FloatTensor).
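For example, the recommended factory-function style with an explicit dtype looks like this:

```python
import torch

# Preferred: a factory function with an explicit dtype argument ...
x = torch.empty(2, 3, dtype=torch.float64)
# ... rather than the legacy per-dtype constructor (torch.DoubleTensor(2, 3))
assert x.dtype == torch.float64
assert x.shape == (2, 3)
```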

Initializing and basic operations

A tensor can be constructed from a Python list or sequence using the torch.tensor() constructor:

>>> import torch
>>> import numpy as np
>>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
        [ 1.0000, -1.0000]])
>>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1,  2,  3],
        [ 4,  5,  6]])


Warning

torch.tensor() always copies data. If you already have a Tensor and just want to change its requires_grad flag, use requires_grad_() or detach() to avoid a copy. If you have a numpy array and want to avoid a copy, use torch.as_tensor().
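The copy-vs-share distinction can be demonstrated directly; a minimal sketch (torch.as_tensor shares memory with the numpy array here because it is a CPU array with a compatible dtype):

```python
import numpy as np
import torch

a = np.array([1, 2, 3])
t_copy = torch.tensor(a)       # always copies the numpy data
t_shared = torch.as_tensor(a)  # shares memory where possible (CPU, matching dtype)

a[0] = 99
assert t_copy[0].item() == 1    # the copy does not see the numpy write
assert t_shared[0].item() == 99 # the shared tensor does
```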

A tensor of a specific data type can be constructed by passing a torch.dtype and/or a torch.device to a constructor or tensor creation op:

>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0,  0,  0,  0],
        [ 0,  0,  0,  0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000,  1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000,  1.0000]], dtype=torch.float64, device='cuda:0')


The contents of a tensor can be accessed and modified using Python’s indexing and slicing notation:

>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1,  8,  3],
        [ 4,  5,  6]])


Use torch.Tensor.item() to get a Python number from a tensor containing a single value:

>>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5


A tensor can be created with requires_grad=True so that torch.autograd records operations on it for automatic differentiation.

>>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
        [ 2.0000,  2.0000]])


Each tensor has an associated torch.Storage, which holds its data. The tensor class also provides a multi-dimensional, strided view of a storage and defines numeric operations on it.
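The stride-based view machinery can be observed directly; for instance, a transpose is just a view that reinterprets the same storage with different strides:

```python
import torch

x = torch.arange(6).reshape(2, 3)
y = x.t()  # transpose: a view, no data is copied

assert x.stride() == (3, 1)
assert y.stride() == (1, 3)
# Both views read from the same underlying storage
assert x.untyped_storage().data_ptr() == y.untyped_storage().data_ptr()
```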

Note

Methods which mutate a tensor are marked with an underscore suffix. For example, torch.FloatTensor.abs_() computes the absolute value in-place and returns the modified tensor, while torch.FloatTensor.abs() computes the result in a new tensor.
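A quick illustration of the underscore convention:

```python
import torch

x = torch.tensor([-1.0, -2.0])
y = x.abs()        # out-of-place: returns a new tensor, x is unchanged
assert torch.equal(x, torch.tensor([-1.0, -2.0]))

x.abs_()           # in-place: mutates x and returns it
assert torch.equal(x, y)
```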

Note

To change an existing tensor’s torch.device and/or torch.dtype, consider using to() method on the tensor.
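For example, to() can change the dtype, the device, or both; the device move below is guarded since CUDA may not be available:

```python
import torch

x = torch.zeros(2, 2)                 # default dtype: torch.float32
y = x.to(torch.float64)               # dtype conversion returns a new tensor
if torch.cuda.is_available():         # device move, only when a GPU exists
    y = y.to(torch.device('cuda:0'))
assert y.dtype == torch.float64
```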

Warning

The current implementation of torch.Tensor introduces memory overhead, so it might lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider using one large structure instead.

Tensor class reference

class torch.Tensor

There are a few main ways to create a tensor, depending on your use case.

• To create a tensor with pre-existing data, use torch.tensor().

• To create a tensor with specific size, use torch.* tensor creation ops (see Creation Ops).

• To create a tensor with the same size (and similar types) as another tensor, use torch.*_like tensor creation ops (see Creation Ops).

• To create a tensor with similar type but different size as another tensor, use tensor.new_* creation ops.
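The four creation styles above can be sketched side by side:

```python
import torch

a = torch.tensor([1.0, 2.0])   # from pre-existing data
b = torch.zeros(2, 3)          # specific size (a Creation Op)
c = torch.zeros_like(b)        # same size and dtype as another tensor
d = b.new_ones(5)              # same dtype as b, but a different size

assert c.shape == b.shape
assert d.dtype == b.dtype
```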

Tensor.T

Returns a view of this tensor with its dimensions reversed.

If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, ..., 0).

Warning

The use of Tensor.T on tensors of dimension other than 2 to reverse their shape is deprecated and will throw an error in a future release. Consider mT to transpose batches of matrices or x.permute(*torch.arange(x.ndim - 1, -1, -1)) to reverse the dimensions of a tensor.
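A sketch of these equivalences, using the 2-D case for .T and the non-deprecated permute spelling for the general case:

```python
import torch

m = torch.rand(2, 3)
# For a 2-D tensor, .T is a plain transpose
assert torch.equal(m.T, m.permute(1, 0))

# For n dimensions, reverse them with the recommended spelling:
x = torch.rand(2, 3, 4)
rev = x.permute(*torch.arange(x.ndim - 1, -1, -1))
assert rev.shape == (4, 3, 2)
```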

Tensor.H

Returns a view of a matrix (2-D tensor) conjugated and transposed.

x.H is equivalent to x.transpose(0, 1).conj() for complex matrices and x.transpose(0, 1) for real matrices.

See also mH, an attribute that also works on batches of matrices.

Tensor.mT

Returns a view of this tensor with the last two dimensions transposed.

x.mT is equivalent to x.transpose(-2, -1).

Tensor.mH

Accessing this property is equivalent to calling adjoint().
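The mT/mH equivalences stated above can be checked directly on a batch of complex matrices (the shapes here are arbitrary, for illustration only):

```python
import torch

# A batch of 5 complex 2x3 matrices
x = torch.complex(torch.randn(5, 2, 3), torch.randn(5, 2, 3))

assert torch.equal(x.mT, x.transpose(-2, -1))         # batched transpose
assert torch.equal(x.mH, x.transpose(-2, -1).conj())  # batched conjugate transpose
assert torch.equal(x.mH, x.adjoint())                 # same as adjoint()
```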

Tensor.new_tensor: Returns a new Tensor with data as the tensor data.
Tensor.new_full: Returns a Tensor of size size filled with fill_value.
Tensor.new_empty: Returns a Tensor of size size filled with uninitialized data.
Tensor.new_ones: Returns a Tensor of size size filled with 1.
Tensor.new_zeros: Returns a Tensor of size size filled with 0.
Tensor.is_cuda: Is True if the Tensor is stored on the GPU, False otherwise.
Tensor.is_quantized: Is True if the Tensor is quantized, False otherwise.
Tensor.is_meta: Is True if the Tensor is a meta tensor, False otherwise.
Tensor.device: Is the torch.device where this Tensor is.
Tensor.grad: This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self.
Tensor.ndim: Alias for dim().
Tensor.real: Returns a new tensor containing real values of the self tensor for a complex-valued input tensor.
Tensor.imag: Returns a new tensor containing imaginary values of the self tensor.
Tensor.nbytes: Returns the number of bytes consumed by the "view" of elements of the Tensor if the Tensor does not use sparse storage layout.
Tensor.itemsize: Alias for element_size().
Tensor.abs
Tensor.abs_: In-place version of abs().
Tensor.absolute: Alias for abs().
Tensor.absolute_: In-place version of absolute(); alias for abs_().
Tensor.acos
Tensor.acos_: In-place version of acos().
Tensor.arccos
Tensor.arccos_: In-place version of arccos().
Tensor.add: Add a scalar or tensor to self tensor.
Tensor.add_: In-place version of add().
Tensor.addbmm
Tensor.addbmm_: In-place version of addbmm().
Tensor.addcdiv
Tensor.addcdiv_: In-place version of addcdiv().
Tensor.addcmul
Tensor.addcmul_: In-place version of addcmul().
Tensor.addmm
Tensor.addmm_: In-place version of addmm().
Tensor.sspaddmm
Tensor.addmv
Tensor.addmv_: In-place version of addmv().
Tensor.addr
Tensor.addr_: In-place version of addr().
Tensor.adjoint: Alias for adjoint().
Tensor.allclose
Tensor.amax
Tensor.amin
Tensor.aminmax
Tensor.angle
Tensor.apply_: Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
Tensor.argmax
Tensor.argmin
Tensor.argsort
Tensor.argwhere
Tensor.asin
Tensor.asin_: In-place version of asin().
Tensor.arcsin
Tensor.arcsin_: In-place version of arcsin().
Tensor.as_strided
Tensor.atan
Tensor.atan_: In-place version of atan().
Tensor.arctan
Tensor.arctan_: In-place version of arctan().
Tensor.atan2
Tensor.atan2_: In-place version of atan2().
Tensor.arctan2
Tensor.arctan2_: Alias for atan2_().
Tensor.all
Tensor.any
Tensor.backward: Computes the gradient of current tensor wrt graph leaves.
Tensor.baddbmm
Tensor.baddbmm_: In-place version of baddbmm().
Tensor.bernoulli: Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]).
Tensor.bernoulli_: Fills each location of self with an independent sample from Bernoulli(p).
Tensor.bfloat16: self.bfloat16() is equivalent to self.to(torch.bfloat16).
Tensor.bincount
Tensor.bitwise_not
Tensor.bitwise_not_: In-place version of bitwise_not().
Tensor.bitwise_and
Tensor.bitwise_and_: In-place version of bitwise_and().
Tensor.bitwise_or
Tensor.bitwise_or_: In-place version of bitwise_or().
Tensor.bitwise_xor
Tensor.bitwise_xor_: In-place version of bitwise_xor().
Tensor.bitwise_left_shift
Tensor.bitwise_left_shift_: In-place version of bitwise_left_shift().
Tensor.bitwise_right_shift
Tensor.bitwise_right_shift_: In-place version of bitwise_right_shift().
Tensor.bmm
Tensor.bool: self.bool() is equivalent to self.to(torch.bool).
Tensor.byte: self.byte() is equivalent to self.to(torch.uint8).
Tensor.broadcast_to
Tensor.cauchy_: Fills the tensor with numbers drawn from the Cauchy distribution.
Tensor.ceil
Tensor.ceil_: In-place version of ceil().
Tensor.char: self.char() is equivalent to self.to(torch.int8).
Tensor.cholesky
Tensor.cholesky_inverse
Tensor.cholesky_solve
Tensor.chunk
Tensor.clamp
Tensor.clamp_: In-place version of clamp().
Tensor.clip: Alias for clamp().
Tensor.clip_: Alias for clamp_().
Tensor.clone
Tensor.contiguous: Returns a contiguous in-memory tensor containing the same data as self tensor.
Tensor.copy_: Copies the elements from src into self tensor and returns self.
Tensor.conj
Tensor.conj_physical
Tensor.conj_physical_: In-place version of conj_physical().
Tensor.resolve_conj
Tensor.resolve_neg
Tensor.copysign
Tensor.copysign_: In-place version of copysign().
Tensor.cos
Tensor.cos_: In-place version of cos().
Tensor.cosh
Tensor.cosh_: In-place version of cosh().
Tensor.corrcoef
Tensor.count_nonzero
Tensor.cov
Tensor.acosh
Tensor.acosh_: In-place version of acosh().
Tensor.arccosh: Alias for acosh().
Tensor.arccosh_: Alias for acosh_().
Tensor.cpu: Returns a copy of this object in CPU memory.
Tensor.cross
Tensor.cuda: Returns a copy of this object in CUDA memory.
Tensor.logcumsumexp
Tensor.cummax
Tensor.cummin
Tensor.cumprod
Tensor.cumprod_: In-place version of cumprod().
Tensor.cumsum
Tensor.cumsum_: In-place version of cumsum().
Tensor.chalf: self.chalf() is equivalent to self.to(torch.complex32).
Tensor.cfloat: self.cfloat() is equivalent to self.to(torch.complex64).
Tensor.cdouble: self.cdouble() is equivalent to self.to(torch.complex128).
Tensor.data_ptr: Returns the address of the first element of self tensor.
Tensor.deg2rad
Tensor.dequantize: Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
Tensor.det
Tensor.dense_dim: Return the number of dense dimensions in a sparse tensor self.
Tensor.detach: Returns a new Tensor, detached from the current graph.
Tensor.detach_: Detaches the Tensor from the graph that created it, making it a leaf.
Tensor.diag
Tensor.diag_embed
Tensor.diagflat
Tensor.diagonal
Tensor.diagonal_scatter
Tensor.fill_diagonal_: Fill the main diagonal of a tensor that has at least 2 dimensions.
Tensor.fmax
Tensor.fmin
Tensor.diff
Tensor.digamma
Tensor.digamma_: In-place version of digamma().
Tensor.dim: Returns the number of dimensions of self tensor.
Tensor.dim_order: Returns a tuple of int describing the dim order or physical layout of self.
Tensor.dist
Tensor.div
Tensor.div_: In-place version of div().
Tensor.divide
Tensor.divide_: In-place version of divide().
Tensor.dot
Tensor.double: self.double() is equivalent to self.to(torch.float64).
Tensor.dsplit
Tensor.element_size: Returns the size in bytes of an individual element.
Tensor.eq
Tensor.eq_: In-place version of eq().
Tensor.equal
Tensor.erf
Tensor.erf_: In-place version of erf().
Tensor.erfc
Tensor.erfc_: In-place version of erfc().
Tensor.erfinv
Tensor.erfinv_: In-place version of erfinv().
Tensor.exp
Tensor.exp_: In-place version of exp().
Tensor.expm1
Tensor.expm1_: In-place version of expm1().
Tensor.expand: Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
Tensor.expand_as: Expand this tensor to the same size as other.
Tensor.exponential_: Fills self tensor with elements drawn from the exponential probability density function.
Tensor.fix
Tensor.fix_: In-place version of fix().
Tensor.fill_: Fills self tensor with the specified value.
Tensor.flatten
Tensor.flip
Tensor.fliplr
Tensor.flipud
Tensor.float: self.float() is equivalent to self.to(torch.float32).
Tensor.float_power
Tensor.float_power_: In-place version of float_power().
Tensor.floor
Tensor.floor_: In-place version of floor().
Tensor.floor_divide
Tensor.floor_divide_: In-place version of floor_divide().
Tensor.fmod
Tensor.fmod_: In-place version of fmod().
Tensor.frac
Tensor.frac_: In-place version of frac().
Tensor.frexp
Tensor.gather
Tensor.gcd
Tensor.gcd_: In-place version of gcd().
Tensor.ge
Tensor.ge_: In-place version of ge().
Tensor.greater_equal
Tensor.greater_equal_: In-place version of greater_equal().
Tensor.geometric_: Fills self tensor with elements drawn from the geometric distribution.
Tensor.geqrf
Tensor.ger
Tensor.get_device: For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides.
Tensor.gt
Tensor.gt_: In-place version of gt().
Tensor.greater
Tensor.greater_: In-place version of greater().
Tensor.half: self.half() is equivalent to self.to(torch.float16).
Tensor.hardshrink
Tensor.heaviside
Tensor.histc
Tensor.histogram
Tensor.hsplit
Tensor.hypot
Tensor.hypot_: In-place version of hypot().
Tensor.i0
Tensor.i0_: In-place version of i0().
Tensor.igamma
Tensor.igamma_: In-place version of igamma().
Tensor.igammac
Tensor.igammac_: In-place version of igammac().
Tensor.index_add_: Accumulate the elements of alpha times source into the self tensor by adding to the indices in the order given in index.
Tensor.index_add: Out-of-place version of torch.Tensor.index_add_().
Tensor.index_copy_: Copies the elements of tensor into the self tensor by selecting the indices in the order given in index.
Tensor.index_copy: Out-of-place version of torch.Tensor.index_copy_().
Tensor.index_fill_: Fills the elements of the self tensor with value value by selecting the indices in the order given in index.
Tensor.index_fill: Out-of-place version of torch.Tensor.index_fill_().
Tensor.index_put_: Puts values from the tensor values into the tensor self using the indices specified in indices (which is a tuple of Tensors).
Tensor.index_put: Out-of-place version of index_put_().
Tensor.index_reduce_: Accumulate the elements of source into the self tensor by accumulating to the indices in the order given in index using the reduction given by the reduce argument.
Tensor.index_reduce
Tensor.index_select
Tensor.indices: Return the indices tensor of a sparse COO tensor.
Tensor.inner
Tensor.int: self.int() is equivalent to self.to(torch.int32).
Tensor.int_repr: Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
Tensor.inverse
Tensor.isclose
Tensor.isfinite
Tensor.isinf
Tensor.isposinf
Tensor.isneginf
Tensor.isnan
Tensor.is_contiguous: Returns True if self tensor is contiguous in memory in the order specified by memory format.
Tensor.is_complex: Returns True if the data type of self is a complex data type.
Tensor.is_conj: Returns True if the conjugate bit of self is set to true.
Tensor.is_floating_point: Returns True if the data type of self is a floating point data type.
Tensor.is_inference: See torch.is_inference().
Tensor.is_leaf: All Tensors that have requires_grad which is False will be leaf Tensors by convention.
Tensor.is_pinned: Returns true if this tensor resides in pinned memory.
Tensor.is_set_to: Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
Tensor.is_shared: Checks if tensor is in shared memory.
Tensor.is_signed: Returns True if the data type of self is a signed data type.
Tensor.is_sparse: Is True if the Tensor uses sparse COO storage layout, False otherwise.
Tensor.istft
Tensor.isreal
Tensor.item: Returns the value of this tensor as a standard Python number.
Tensor.kthvalue
Tensor.lcm
Tensor.lcm_: In-place version of lcm().
Tensor.ldexp
Tensor.ldexp_: In-place version of ldexp().
Tensor.le
Tensor.le_: In-place version of le().
Tensor.less_equal
Tensor.less_equal_: In-place version of less_equal().
Tensor.lerp
Tensor.lerp_: In-place version of lerp().
Tensor.lgamma
Tensor.lgamma_: In-place version of lgamma().
Tensor.log
Tensor.log_: In-place version of log().
Tensor.logdet
Tensor.log10
Tensor.log10_: In-place version of log10().
Tensor.log1p
Tensor.log1p_: In-place version of log1p().
Tensor.log2
Tensor.log2_: In-place version of log2().
Tensor.log_normal_: Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean and standard deviation.
Tensor.logaddexp
Tensor.logaddexp2
Tensor.logsumexp
Tensor.logical_and
Tensor.logical_and_: In-place version of logical_and().
Tensor.logical_not
Tensor.logical_not_: In-place version of logical_not().
Tensor.logical_or
Tensor.logical_or_: In-place version of logical_or().
Tensor.logical_xor
Tensor.logical_xor_: In-place version of logical_xor().
Tensor.logit
Tensor.logit_: In-place version of logit().
Tensor.long: self.long() is equivalent to self.to(torch.int64).
Tensor.lt
Tensor.lt_: In-place version of lt().
Tensor.less: Alias for lt().
Tensor.less_: In-place version of less().
Tensor.lu
Tensor.lu_solve
Tensor.as_subclass: Makes a cls instance with the same data pointer as self.
Tensor.map_: Applies callable for each element in self tensor and the given tensor and stores the results in self tensor.
Tensor.masked_scatter_: Copies elements from source into self tensor at positions where the mask is True.
Tensor.masked_scatter: Out-of-place version of torch.Tensor.masked_scatter_().
Tensor.masked_fill_: Fills elements of self tensor with value where mask is True.
Tensor.masked_fill: Out-of-place version of torch.Tensor.masked_fill_().
Tensor.masked_select
Tensor.matmul
Tensor.matrix_power: Note: matrix_power() is deprecated, use torch.linalg.matrix_power() instead.
Tensor.matrix_exp
Tensor.max
Tensor.maximum
Tensor.mean
Tensor.module_load: Defines how to transform other when loading it into self in load_state_dict().
Tensor.nanmean
Tensor.median
Tensor.nanmedian
Tensor.min
Tensor.minimum
Tensor.mm
Tensor.smm
Tensor.mode
Tensor.movedim
Tensor.moveaxis
Tensor.msort
Tensor.mul
Tensor.mul_: In-place version of mul().
Tensor.multiply
Tensor.multiply_: In-place version of multiply().
Tensor.multinomial
Tensor.mv
Tensor.mvlgamma
Tensor.mvlgamma_: In-place version of mvlgamma().
Tensor.nansum
Tensor.narrow
Tensor.narrow_copy
Tensor.ndimension: Alias for dim().
Tensor.nan_to_num
Tensor.nan_to_num_: In-place version of nan_to_num().
Tensor.ne
Tensor.ne_: In-place version of ne().
Tensor.not_equal
Tensor.not_equal_: In-place version of not_equal().
Tensor.neg
Tensor.neg_: In-place version of neg().
Tensor.negative
Tensor.negative_: In-place version of negative().
Tensor.nelement: Alias for numel().
Tensor.nextafter
Tensor.nextafter_: In-place version of nextafter().
Tensor.nonzero
Tensor.norm
Tensor.normal_: Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
Tensor.numel
Tensor.numpy: Returns the tensor as a NumPy ndarray.
Tensor.orgqr
Tensor.ormqr
Tensor.outer
Tensor.permute
Tensor.pin_memory: Copies the tensor to pinned memory, if it's not already pinned.
Tensor.pinverse
Tensor.polygamma
Tensor.polygamma_: In-place version of polygamma().
Tensor.positive
Tensor.pow
Tensor.pow_: In-place version of pow().
Tensor.prod
Tensor.put_: Copies the elements from source into the positions specified by index.
Tensor.qr
Tensor.qscheme: Returns the quantization scheme of a given QTensor.
Tensor.quantile
Tensor.nanquantile
Tensor.q_scale: Given a Tensor quantized by linear (affine) quantization, returns the scale of the underlying quantizer.
Tensor.q_zero_point: Given a Tensor quantized by linear (affine) quantization, returns the zero_point of the underlying quantizer.
Tensor.q_per_channel_scales: Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer.
Tensor.q_per_channel_zero_points: Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer.
Tensor.q_per_channel_axis: Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
Tensor.rad2deg
Tensor.random_: Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1].
Tensor.ravel
Tensor.reciprocal
Tensor.reciprocal_: In-place version of reciprocal().
Tensor.record_stream: Marks the tensor as having been used by this stream.
Tensor.register_hook: Registers a backward hook.
Tensor.register_post_accumulate_grad_hook: Registers a backward hook that runs after grad accumulation.
Tensor.remainder
Tensor.remainder_: In-place version of remainder().
Tensor.renorm
Tensor.renorm_: In-place version of renorm().
Tensor.repeat: Repeats this tensor along the specified dimensions.
Tensor.repeat_interleave
Tensor.requires_grad: Is True if gradients need to be computed for this Tensor, False otherwise.
Tensor.requires_grad_: Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place.
Tensor.reshape: Returns a tensor with the same data and number of elements as self but with the specified shape.
Tensor.reshape_as: Returns this tensor as the same shape as other.
Tensor.resize_: Resizes self tensor to the specified size.
Tensor.resize_as_: Resizes the self tensor to be the same size as the specified tensor.
Tensor.retain_grad: Enables this Tensor to have its grad populated during backward().
Tensor.retains_grad: Is True if this Tensor is non-leaf and its grad is enabled to be populated during backward(), False otherwise.
Tensor.roll
Tensor.rot90
Tensor.round
Tensor.round_: In-place version of round().
Tensor.rsqrt
Tensor.rsqrt_: In-place version of rsqrt().
Tensor.scatter: Out-of-place version of torch.Tensor.scatter_().
Tensor.scatter_: Writes all values from the tensor src into self at the indices specified in the index tensor.
Tensor.scatter_add_: Adds all values from the tensor src into self at the indices specified in the index tensor in a similar fashion as scatter_().
Tensor.scatter_add: Out-of-place version of torch.Tensor.scatter_add_().
Tensor.scatter_reduce_: Reduces all values from the src tensor to the indices specified in the index tensor in the self tensor using the applied reduction defined via the reduce argument ("sum", "prod", "mean", "amax", "amin").
Tensor.scatter_reduce: Out-of-place version of torch.Tensor.scatter_reduce_().
Tensor.select
Tensor.select_scatter
Tensor.set_: Sets the underlying storage, size, and strides.
Tensor.share_memory_: Moves the underlying storage to shared memory.
Tensor.short: self.short() is equivalent to self.to(torch.int16).
Tensor.sigmoid
Tensor.sigmoid_: In-place version of sigmoid().
Tensor.sign
Tensor.sign_: In-place version of sign().
Tensor.signbit
Tensor.sgn
Tensor.sgn_: In-place version of sgn().
Tensor.sin
Tensor.sin_: In-place version of sin().
Tensor.sinc
Tensor.sinc_: In-place version of sinc().
Tensor.sinh
Tensor.sinh_: In-place version of sinh().
Tensor.asinh
Tensor.asinh_: In-place version of asinh().
Tensor.arcsinh
Tensor.arcsinh_: In-place version of arcsinh().
Tensor.shape: Returns the size of the self tensor.
Tensor.size: Returns the size of the self tensor.
Tensor.slogdet
Tensor.slice_scatter
Tensor.softmax: Alias for torch.nn.functional.softmax().
Tensor.sort
Tensor.split
Tensor.sparse_mask: Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask.
Tensor.sparse_dim: Return the number of sparse dimensions in a sparse tensor self.
Tensor.sqrt
Tensor.sqrt_: In-place version of sqrt().
Tensor.square
Tensor.square_: In-place version of square().
Tensor.squeeze
Tensor.squeeze_: In-place version of squeeze().
Tensor.std
Tensor.stft
Tensor.storage: Returns the underlying TypedStorage.
Tensor.untyped_storage: Returns the underlying UntypedStorage.
Tensor.storage_offset: Returns self tensor's offset in the underlying storage in terms of number of storage elements (not bytes).
Tensor.storage_type: Returns the type of the underlying storage.
Tensor.stride: Returns the stride of self tensor.
Tensor.sub
Tensor.sub_: In-place version of sub().
Tensor.subtract
Tensor.subtract_: In-place version of subtract().
Tensor.sum
Tensor.sum_to_size: Sum this tensor to size.
Tensor.svd
Tensor.swapaxes
Tensor.swapdims
Tensor.t
Tensor.t_: In-place version of t().
Tensor.tensor_split
Tensor.tile
Tensor.to: Performs Tensor dtype and/or device conversion.
Tensor.to_mkldnn: Returns a copy of the tensor in torch.mkldnn layout.
Tensor.take
Tensor.take_along_dim
Tensor.tan
Tensor.tan_: In-place version of tan().
Tensor.tanh
Tensor.tanh_: In-place version of tanh().
Tensor.atanh
Tensor.atanh_: In-place version of atanh().
Tensor.arctanh
Tensor.arctanh_: In-place version of arctanh().
Tensor.tolist: Returns the tensor as a (nested) list.
Tensor.topk
Tensor.to_dense: Creates a strided copy of self if self is not a strided tensor, otherwise returns self.
Tensor.to_sparse: Returns a sparse copy of the tensor.
Tensor.to_sparse_csr: Convert a tensor to compressed row storage format (CSR).
Tensor.to_sparse_csc: Convert a tensor to compressed column storage (CSC) format.
Tensor.to_sparse_bsr: Convert a tensor to a block sparse row (BSR) storage format of given blocksize.
Tensor.to_sparse_bsc: Convert a tensor to a block sparse column (BSC) storage format of given blocksize.
Tensor.trace
Tensor.transpose
Tensor.transpose_: In-place version of transpose().
Tensor.triangular_solve
Tensor.tril
Tensor.tril_: In-place version of tril().
Tensor.triu
Tensor.triu_: In-place version of triu().
Tensor.true_divide
Tensor.true_divide_: In-place version of true_divide().
Tensor.trunc
Tensor.trunc_: In-place version of trunc().
Tensor.type: Returns the type if dtype is not provided, else casts this object to the specified type.
Tensor.type_as: Returns this tensor cast to the type of the given tensor.
Tensor.unbind
Tensor.unflatten
Tensor.unfold: Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.
Tensor.uniform_: Fills self tensor with numbers sampled from the continuous uniform distribution.
Tensor.unique: Returns the unique elements of the input tensor.
Tensor.unique_consecutive: Eliminates all but the first element from every consecutive group of equivalent elements.
Tensor.unsqueeze
Tensor.unsqueeze_: In-place version of unsqueeze().
Tensor.values: Return the values tensor of a sparse COO tensor.
Tensor.var
Tensor.vdot
Tensor.view: Returns a new tensor with the same data as the self tensor but of a different shape.
Tensor.view_as: View this tensor as the same size as other.
Tensor.vsplit
Tensor.where: self.where(condition, y) is equivalent to torch.where(condition, self, y).
Tensor.xlogy
Tensor.xlogy_: In-place version of xlogy().
Tensor.zero_: Fills self tensor with zeros.
