torch

The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for the efficient serialization of Tensors and arbitrary types, among other useful utilities.

It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
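
A minimal sketch of moving a computation onto the GPU (guarded so that it also runs on CPU-only machines):

>>> x = torch.tensor([1., 2.])
>>> if torch.cuda.is_available():
...     x = x.to('cuda')  # subsequent operations on x run on the current CUDA device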

Tensors

torch.is_tensor(obj)[source]

Returns True if obj is a PyTorch tensor.

Parameters

obj (Object) – Object to test
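
For example, a tensor passes the check while a plain Python list does not:

>>> torch.is_tensor(torch.empty(2))
True
>>> torch.is_tensor([1, 2])
False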

torch.is_storage(obj)[source]

Returns True if obj is a PyTorch storage object.

Parameters

obj (Object) – Object to test
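
For example, a storage object passes the check while a tensor does not:

>>> torch.is_storage(torch.FloatStorage(5))
True
>>> torch.is_storage(torch.empty(5))
False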

torch.is_floating_point(input) -> (bool)

Returns True if the data type of input is a floating point data type, i.e., one of torch.float64, torch.float32, and torch.float16.

Parameters

input (Tensor) – the PyTorch tensor to test
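
For example:

>>> torch.is_floating_point(torch.tensor([1.0]))
True
>>> torch.is_floating_point(torch.tensor([1]))  # inferred as torch.int64
False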

torch.set_default_dtype(d)[source]

Sets the default floating point dtype to d. This type will be used as the default floating point type for type inference in torch.tensor().

The default floating point dtype is initially torch.float32.

Parameters

d (torch.dtype) – the floating point dtype to make the default

Example:

>>> torch.tensor([1.2, 3]).dtype           # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.tensor([1.2, 3]).dtype           # a new floating point tensor
torch.float64
torch.get_default_dtype() → torch.dtype

Get the current default floating point torch.dtype.

Example:

>>> torch.get_default_dtype()  # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.get_default_dtype()  # default is now changed to torch.float64
torch.float64
>>> torch.set_default_tensor_type(torch.FloatTensor)  # setting tensor type also affects this
>>> torch.get_default_dtype()  # changed to torch.float32, the dtype for torch.FloatTensor
torch.float32
torch.set_default_tensor_type(t)[source]

Sets the default torch.Tensor type to floating point tensor type t. This type will also be used as the default floating point type for type inference in torch.tensor().

The default floating point tensor type is initially torch.FloatTensor.

Parameters

t (type or string) – the floating point tensor type or its name

Example:

>>> torch.tensor([1.2, 3]).dtype    # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_tensor_type(torch.DoubleTensor)
>>> torch.tensor([1.2, 3]).dtype    # a new floating point tensor
torch.float64
torch.numel(input) → int

Returns the total number of elements in the input tensor.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(1, 2, 3, 4, 5)
>>> torch.numel(a)
120
>>> a = torch.zeros(4,4)
>>> torch.numel(a)
16
torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None)[source]

Set options for printing. Items shamelessly taken from NumPy.

Parameters
  • precision – Number of digits of precision for floating point output (default = 4).

  • threshold – Total number of array elements which trigger summarization rather than full repr (default = 1000).

  • edgeitems – Number of array items in summary at beginning and end of each dimension (default = 3).

  • linewidth – The number of characters per line for the purpose of inserting line breaks (default = 80). Thresholded matrices will ignore this parameter.

  • profile – Sane defaults for pretty printing. Can override with any of the above options. (any one of default, short, full)

  • sci_mode – Enable (True) or disable (False) scientific notation. If None (default) is specified, the value is defined by _Formatter
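
An illustrative round trip (precision affects subsequent printing; profile='default' restores the initial defaults):

>>> torch.set_printoptions(precision=2)
>>> torch.tensor([1.23456789])
tensor([1.23])
>>> torch.set_printoptions(profile='default')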

torch.set_flush_denormal(mode) → bool

Disables denormal floating numbers on CPU.

Returns True if your system supports flushing denormal numbers and it successfully configures flush denormal mode. set_flush_denormal() is only supported on x86 architectures supporting SSE3.

Parameters

mode (bool) – Controls whether to enable flush denormal mode or not

Example:

>>> torch.set_flush_denormal(True)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor([ 0.], dtype=torch.float64)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor(9.88131e-324 *
       [ 1.0000], dtype=torch.float64)

Creation Ops

Note

Random sampling creation ops are listed under Random sampling and include: torch.rand(), torch.rand_like(), torch.randn(), torch.randn_like(), torch.randint(), torch.randint_like(), and torch.randperm(). You may also use torch.empty() with the in-place random sampling methods to create torch.Tensor s with values sampled from a broader range of distributions.

torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor

Constructs a tensor with data.

Warning

torch.tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a NumPy ndarray and want to avoid a copy, use torch.as_tensor().

Warning

When data is a tensor x, torch.tensor() reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Therefore torch.tensor(x) is equivalent to x.clone().detach() and torch.tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.
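
A short sketch of the recommended pattern:

>>> x = torch.ones(3, requires_grad=True)
>>> y = x.clone().detach()                       # copy; y is a leaf with no grad history
>>> y.requires_grad
False
>>> z = x.clone().detach().requires_grad_(True)  # copy that records gradients
>>> z.requires_grad
True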

Parameters
  • data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

  • pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

Example:

>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000,  1.2000],
        [ 2.2000,  3.1000],
        [ 4.9000,  5.2000]])

>>> torch.tensor([0, 1])  # Type inference on data
tensor([ 0,  1])

>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
                 dtype=torch.float64,
                 device=torch.device('cuda:0'))  # creates a torch.cuda.DoubleTensor
tensor([[ 0.1111,  0.2222,  0.3333]], dtype=torch.float64, device='cuda:0')

>>> torch.tensor(3.14159)  # Create a scalar (zero-dimensional tensor)
tensor(3.1416)

>>> torch.tensor([])  # Create an empty tensor (of size (0,))
tensor([])
torch.sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False) → Tensor

Constructs a sparse tensor in COO(rdinate) format with non-zero elements at the given indices and with the given values. A sparse tensor can be uncoalesced; in that case, there are duplicate coordinates in the indices, and the value at that index is the sum of all duplicate value entries: see torch.sparse.

Parameters
  • indices (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch.LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.

  • values (array_like) – Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

  • size (list, tuple, or torch.Size, optional) – Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from values.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> i = torch.tensor([[0, 1, 1],
                      [2, 0, 2]])
>>> v = torch.tensor([3, 4, 5], dtype=torch.float32)
>>> torch.sparse_coo_tensor(i, v, [2, 4])
tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([3., 4., 5.]),
       size=(2, 4), nnz=3, layout=torch.sparse_coo)

>>> torch.sparse_coo_tensor(i, v)  # Shape inference
tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([3., 4., 5.]),
       size=(2, 3), nnz=3, layout=torch.sparse_coo)

>>> torch.sparse_coo_tensor(i, v, [2, 4],
                            dtype=torch.float64,
                            device=torch.device('cuda:0'))
tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([3., 4., 5.]),
       device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,
       layout=torch.sparse_coo)

# Create an empty sparse tensor with the following invariants:
#   1. sparse_dim + dense_dim = len(SparseTensor.shape)
#   2. SparseTensor._indices().shape = (sparse_dim, nnz)
#   3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])
#
# For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and
# sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))
>>> torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])
tensor(indices=tensor([], size=(1, 0)),
       values=tensor([], size=(0,)),
       size=(1,), nnz=0, layout=torch.sparse_coo)

# and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and
# sparse_dim = 1
>>> torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])
tensor(indices=tensor([], size=(1, 0)),
       values=tensor([], size=(0, 2)),
       size=(1, 2), nnz=0, layout=torch.sparse_coo)
torch.as_tensor(data, dtype=None, device=None) → Tensor

Converts the data into a torch.Tensor. If the data is already a Tensor with the same dtype and device, no copy is performed; otherwise a new Tensor is returned, with the computational graph retained if the data Tensor has requires_grad=True. Similarly, if the data is an ndarray of the corresponding dtype and the device is the CPU, no copy is performed.

Parameters
  • data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

Example:

>>> a = numpy.array([1, 2, 3])
>>> t = torch.as_tensor(a)
>>> t
tensor([ 1,  2,  3])
>>> t[0] = -1
>>> a
array([-1,  2,  3])

>>> a = numpy.array([1, 2, 3])
>>> t = torch.as_tensor(a, device=torch.device('cuda'))
>>> t
tensor([ 1,  2,  3])
>>> t[0] = -1
>>> a
array([1,  2,  3])
torch.as_strided(input, size, stride, storage_offset=0) → Tensor

Create a view of an existing torch.Tensor input with specified size, stride and storage_offset.

Warning

More than one element of a created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

Many PyTorch functions that return a view of a tensor are internally implemented with this function. Those functions, like torch.Tensor.expand(), are easier to read and are therefore preferable.

Parameters
  • input (Tensor) – the input tensor.

  • size (tuple or ints) – the shape of the output tensor

  • stride (tuple or ints) – the stride of the output tensor

  • storage_offset (int, optional) – the offset in the underlying storage of the output tensor

Example:

>>> x = torch.randn(3, 3)
>>> x
tensor([[ 0.9039,  0.6291,  1.0795],
        [ 0.1586,  2.1939, -0.4900],
        [-0.1909, -0.7503,  1.9355]])
>>> t = torch.as_strided(x, (2, 2), (1, 2))
>>> t
tensor([[0.9039, 1.0795],
        [0.6291, 0.1586]])
>>> torch.as_strided(x, (2, 2), (1, 2), 1)
tensor([[0.6291, 0.1586],
        [1.0795, 2.1939]])
torch.from_numpy(ndarray) → Tensor

Creates a Tensor from a numpy.ndarray.

The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. The returned tensor is not resizable.

It currently accepts ndarray with dtypes of numpy.float64, numpy.float32, numpy.float16, numpy.int64, numpy.int32, numpy.int16, numpy.int8, numpy.uint8, and numpy.bool.

Example:

>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
tensor([ 1,  2,  3])
>>> t[0] = -1
>>> a
array([-1,  2,  3])
torch.zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.

Parameters
  • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.zeros(2, 3)
tensor([[ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])

>>> torch.zeros(5)
tensor([ 0.,  0.,  0.,  0.,  0.])
torch.zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 0, with the same size as input. torch.zeros_like(input) is equivalent to torch.zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

Warning

As of 0.4, this function does not support an out keyword. As an alternative, the old torch.zeros_like(input, out=output) is equivalent to torch.zeros(input.size(), out=output).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> input = torch.empty(2, 3)
>>> torch.zeros_like(input)
tensor([[ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])
torch.ones(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.

Parameters
  • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.ones(2, 3)
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])

>>> torch.ones(5)
tensor([ 1.,  1.,  1.,  1.,  1.])
torch.ones_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 1, with the same size as input. torch.ones_like(input) is equivalent to torch.ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

Warning

As of 0.4, this function does not support an out keyword. As an alternative, the old torch.ones_like(input, out=output) is equivalent to torch.ones(input.size(), out=output).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> input = torch.empty(2, 3)
>>> torch.ones_like(input)
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])
torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step beginning from start.

Note that non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.

$\text{out}_{i+1} = \text{out}_{i} + \text{step}$
Parameters
  • start (Number) – the starting value for the set of points. Default: 0.

  • end (Number) – the ending value for the set of points

  • step (Number) – the gap between each pair of adjacent points. Default: 1.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.arange(5)
tensor([ 0,  1,  2,  3,  4])
>>> torch.arange(1, 4)
tensor([ 1,  2,  3])
>>> torch.arange(1, 2.5, 0.5)
tensor([ 1.0000,  1.5000,  2.0000])
torch.range(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a 1-D tensor of size $\left\lfloor \frac{\text{end} - \text{start}}{\text{step}} \right\rfloor + 1$ with values from start to end with step step, where step is the gap between two values in the tensor.

$\text{out}_{i+1} = \text{out}_{i} + \text{step}$

Warning

This function is deprecated in favor of torch.arange().

Parameters
  • start (float) – the starting value for the set of points. Default: 0.

  • end (float) – the ending value for the set of points

  • step (float) – the gap between each pair of adjacent points. Default: 1.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.range(1, 4)
tensor([ 1.,  2.,  3.,  4.])
>>> torch.range(1, 4, 0.5)
tensor([ 1.0000,  1.5000,  2.0000,  2.5000,  3.0000,  3.5000,  4.0000])
torch.linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a one-dimensional tensor of steps equally spaced points between start and end.

The output tensor is 1-D of size steps.

Parameters
  • start (float) – the starting value for the set of points

  • end (float) – the ending value for the set of points

  • steps (int) – number of points to sample between start and end. Default: 100.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.linspace(3, 10, steps=5)
tensor([  3.0000,   4.7500,   6.5000,   8.2500,  10.0000])
>>> torch.linspace(-10, 10, steps=5)
tensor([-10.,  -5.,   0.,   5.,  10.])
>>> torch.linspace(start=-10, end=10, steps=5)
tensor([-10.,  -5.,   0.,   5.,  10.])
>>> torch.linspace(start=-10, end=10, steps=1)
tensor([-10.])
torch.logspace(start, end, steps=100, base=10.0, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a one-dimensional tensor of steps points logarithmically spaced with base base between $\text{base}^{\text{start}}$ and $\text{base}^{\text{end}}$.

The output tensor is 1-D of size steps.

Parameters
  • start (float) – the starting value for the set of points

  • end (float) – the ending value for the set of points

  • steps (int) – number of points to sample between start and end. Default: 100.

  • base (float) – base of the logarithm function. Default: 10.0.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.logspace(start=-10, end=10, steps=5)
tensor([ 1.0000e-10,  1.0000e-05,  1.0000e+00,  1.0000e+05,  1.0000e+10])
>>> torch.logspace(start=0.1, end=1.0, steps=5)
tensor([  1.2589,   2.1135,   3.5481,   5.9566,  10.0000])
>>> torch.logspace(start=0.1, end=1.0, steps=1)
tensor([1.2589])
>>> torch.logspace(start=2, end=2, steps=1, base=2)
tensor([4.])
torch.eye(n, m=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

Parameters
  • n (int) – the number of rows

  • m (int, optional) – the number of columns with default being n

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 2-D tensor with ones on the diagonal and zeros elsewhere

Return type

Tensor

Example:

>>> torch.eye(3)
tensor([[ 1.,  0.,  0.],
        [ 0.,  1.,  0.],
        [ 0.,  0.,  1.]])
torch.empty(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor

Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the variable argument size.

Parameters
  • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

  • pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

Example:

>>> torch.empty(2, 3)
tensor(1.00000e-08 *
       [[ 6.3984,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000]])
torch.empty_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

Returns an uninitialized tensor with the same size as input. torch.empty_like(input) is equivalent to torch.empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> a = torch.empty(2, 3)
>>> torch.empty_like(a)
tensor([[ 9.4064e+13,  2.8000e+01,  9.3493e+13],
        [ 7.5751e+18,  7.1428e+18,  7.5955e+18]])
torch.empty_strided(size, stride, dtype=None, layout=None, device=None, requires_grad=False, pin_memory=False) → Tensor

Returns a tensor filled with uninitialized data. The shape and strides of the tensor are defined by the variable arguments size and stride respectively. torch.empty_strided(size, stride) is equivalent to torch.empty(size).as_strided(size, stride).

Warning

More than one element of the created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

Parameters
  • size (tuple of ints) – the shape of the output tensor

  • stride (tuple of ints) – the strides of the output tensor

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

  • pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

Example:

>>> a = torch.empty_strided((2, 3), (1, 2))
>>> a
tensor([[8.9683e-44, 4.4842e-44, 5.1239e+07],
        [0.0000e+00, 0.0000e+00, 3.0705e-41]])
>>> a.stride()
(1, 2)
>>> a.size()
torch.Size([2, 3])
torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor of size size filled with fill_value.

Parameters
  • size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.

  • fill_value – the number to fill the output tensor with.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.full((2, 3), 3.141592)
tensor([[ 3.1416,  3.1416,  3.1416],
        [ 3.1416,  3.1416,  3.1416]])
torch.full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor with the same size as input filled with fill_value. torch.full_like(input, fill_value) is equivalent to torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • fill_value – the number to fill the output tensor with.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
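
An illustrative example (mirroring torch.full() above):

>>> input = torch.empty(2, 3)
>>> torch.full_like(input, 3.141592)
tensor([[ 3.1416,  3.1416,  3.1416],
        [ 3.1416,  3.1416,  3.1416]])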

torch.quantize_per_tensor(Tensor self, float scale, int zero_point, ScalarType dtype) → Tensor

Converts a float tensor to a quantized tensor with the given scale and zero point.

Parameters
  • input (Tensor) – float tensor to quantize

  • scale (float) – scale to apply in quantization formula

  • zero_point (int) – offset in integer value that maps to float zero

  • dtype (torch.dtype) – the desired data type of returned tensor. Has to be one of the quantized dtypes: torch.quint8, torch.qint8, torch.qint32

Returns

A newly quantized tensor

Return type

Tensor

Example:

>>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8)
tensor([-1.,  0.,  1.,  2.], size=(4,), dtype=torch.quint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
>>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8).int_repr()
tensor([ 0, 10, 20, 30], dtype=torch.uint8)
torch.quantize_per_channel(Tensor self, Tensor scales, Tensor zero_points, int axis, ScalarType dtype) → Tensor

Converts a float tensor to a per-channel quantized tensor with the given scales and zero points.

Parameters
  • input (Tensor) – float tensor to quantize

  • scales (Tensor) – float 1D tensor of scales to use, size should match input.size(axis)

  • zero_points (Tensor) – integer 1D tensor of offsets to use, size should match input.size(axis)

  • axis (int) – dimension on which to apply per-channel quantization

  • dtype (torch.dtype) – the desired data type of returned tensor. Has to be one of the quantized dtypes: torch.quint8, torch.qint8, torch.qint32

Returns

A newly quantized tensor

Return type

Tensor

Example:

>>> x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])
>>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8)
tensor([[-1.,  0.],
        [ 1.,  2.]], size=(2, 2), dtype=torch.quint8,
       quantization_scheme=torch.per_channel_affine,
       scale=tensor([0.1000, 0.0100], dtype=torch.float64),
       zero_point=tensor([10,  0]), axis=0)
>>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr()
tensor([[  0,  10],
        [100, 200]], dtype=torch.uint8)

Indexing, Slicing, Joining, Mutating Ops

torch.cat(tensors, dim=0, out=None) → Tensor

Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.

torch.cat() can be seen as an inverse operation for torch.split() and torch.chunk().

torch.cat() can be best understood via examples.

Parameters
  • tensors (sequence of Tensors) – any python sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

  • dim (int, optional) – the dimension over which the tensors are concatenated

  • out (Tensor, optional) – the output tensor.

Example:

>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614,  0.6580,
         -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497, -0.1034,
         -0.5790,  0.1497]])
torch.chunk(input, chunks, dim=0) → List of Tensors

Splits a tensor into a specific number of chunks.

Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by chunks.

Parameters
  • input (Tensor) – the tensor to split

  • chunks (int) – number of chunks to return

  • dim (int) – dimension along which to split the tensor
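
For example, seven elements split into three chunks yield two chunks of size three and a smaller final chunk:

>>> x = torch.arange(7)
>>> torch.chunk(x, 3)
(tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6]))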

torch.gather(input, dim, index, out=None, sparse_grad=False) → Tensor

Gathers values along an axis specified by dim.

For a 3-D tensor the output is specified by:

out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2

If input is an n-dimensional tensor with size $(x_0, x_1, ..., x_{i-1}, x_i, x_{i+1}, ..., x_{n-1})$ and dim = i, then index must be an $n$-dimensional tensor with size $(x_0, x_1, ..., x_{i-1}, y, x_{i+1}, ..., x_{n-1})$ where $y \geq 1$, and out will have the same size as index.

Parameters
  • input (Tensor) – the source tensor

  • dim (int) – the axis along which to index

  • index (LongTensor) – the indices of elements to gather

  • out (Tensor, optional) – the destination tensor

  • sparse_grad (bool,optional) – If True, gradient w.r.t. input will be a sparse tensor.

Example:

>>> t = torch.tensor([[1,2],[3,4]])
>>> torch.gather(t, 1, torch.tensor([[0,0],[1,0]]))
tensor([[ 1,  1],
        [ 4,  3]])
torch.index_select(input, dim, index, out=None) → Tensor

Returns a new tensor which indexes the input tensor along dimension dim using the entries in index which is a LongTensor.

The returned tensor has the same number of dimensions as the original tensor (input). The dimth dimension has the same size as the length of index; other dimensions have the same size as in the original tensor.

Note

The returned tensor does not use the same storage as the original tensor. If out has a different shape than expected, we silently change it to the correct shape, reallocating the underlying storage if necessary.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension in which we index

  • index (LongTensor) – the 1-D tensor containing the indices to index

  • out (Tensor, optional) – the output tensor.

Example:

>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.1427,  0.0231, -0.5414, -1.0009],
        [-0.4664,  0.2647, -0.1228, -1.1068],
        [-1.1734, -0.6571,  0.7230, -0.6004]])
>>> indices = torch.tensor([0, 2])
>>> torch.index_select(x, 0, indices)
tensor([[ 0.1427,  0.0231, -0.5414, -1.0009],
        [-1.1734, -0.6571,  0.7230, -0.6004]])
>>> torch.index_select(x, 1, indices)
tensor([[ 0.1427, -0.5414],
        [-0.4664, -0.1228],
        [-1.1734,  0.7230]])
torch.masked_select(input, mask, out=None) → Tensor

Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which is a BoolTensor.

The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable.

Note

The returned tensor does not use the same storage as the original tensor.

Parameters
  • input (Tensor) – the input tensor.

  • mask (BoolTensor) – the tensor containing the boolean mask to index with

  • out (Tensor, optional) – the output tensor.

Example:

>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.3552, -2.3825, -0.8297,  0.3477],
        [-1.2035,  1.2252,  0.5002,  0.6248],
        [ 0.1307, -2.0608,  0.1244,  2.0139]])
>>> mask = x.ge(0.5)
>>> mask
tensor([[False, False, False, False],
        [False, True, True, True],
        [False, False, False, True]])
>>> torch.masked_select(x, mask)
tensor([ 1.2252,  0.5002,  0.6248,  2.0139])
torch.narrow(input, dim, start, length) → Tensor

Returns a new tensor that is a narrowed version of the input tensor. The dimension dim spans the elements of input from start to start + length. The returned tensor and the input tensor share the same underlying storage.

Parameters
  • input (Tensor) – the tensor to narrow

  • dim (int) – the dimension along which to narrow

  • start (int) – the index where the narrowed dimension starts

  • length (int) – the length of the narrowed dimension

Example:

>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> torch.narrow(x, 0, 0, 2)
tensor([[ 1,  2,  3],
        [ 4,  5,  6]])
>>> torch.narrow(x, 1, 1, 2)
tensor([[ 2,  3],
        [ 5,  6],
        [ 8,  9]])
torch.nonzero(input, *, out=None, as_tuple=False) → LongTensor or tuple of LongTensors

When as_tuple is false or unspecified:

Returns a tensor containing the indices of all non-zero elements of input. Each row in the result contains the indices of a non-zero element in input. The result is sorted lexicographically, with the last index changing the fastest (C-style).

If input has n dimensions, then the resulting indices tensor out is of size $(z \times n)$, where $z$ is the total number of non-zero elements in the input tensor.

When as_tuple is true:

Returns a tuple of 1-D tensors, one for each dimension in input, each containing the indices (in that dimension) of all non-zero elements of input.

If input has n dimensions, then the resulting tuple contains n tensors of size z, where z is the total number of non-zero elements in the input tensor.

As a special case, when input has zero dimensions and a nonzero scalar value, it is treated as a one-dimensional tensor with one element.

Parameters
  • input (Tensor) – the input tensor.

  • out (LongTensor, optional) – the output tensor containing indices

Returns

If as_tuple is false, the output tensor containing indices. If as_tuple is true, one 1-D tensor for each dimension, containing the indices of each nonzero element along that dimension.

Return type

LongTensor or tuple of LongTensor

Example:

>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
tensor([[ 0],
        [ 1],
        [ 2],
        [ 4]])
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
                                [0.0, 0.4, 0.0, 0.0],
                                [0.0, 0.0, 1.2, 0.0],
                                [0.0, 0.0, 0.0,-0.4]]))
tensor([[ 0,  0],
        [ 1,  1],
        [ 2,  2],
        [ 3,  3]])
>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]), as_tuple=True)
(tensor([0, 1, 2, 4]),)
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
                                [0.0, 0.4, 0.0, 0.0],
                                [0.0, 0.0, 1.2, 0.0],
                                [0.0, 0.0, 0.0,-0.4]]), as_tuple=True)
(tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))
>>> torch.nonzero(torch.tensor(5), as_tuple=True)
(tensor([0]),)
torch.reshape(input, shape) → Tensor

Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

See torch.Tensor.view() on when it is possible to return a view.

A single dimension may be -1, in which case it’s inferred from the remaining dimensions and the number of elements in input.

Parameters
  • input (Tensor) – the tensor to be reshaped

  • shape (tuple of ints) – the new shape

Example:

>>> a = torch.arange(4.)
>>> torch.reshape(a, (2, 2))
tensor([[ 0.,  1.],
        [ 2.,  3.]])
>>> b = torch.tensor([[0, 1], [2, 3]])
>>> torch.reshape(b, (-1,))
tensor([ 0,  1,  2,  3])
torch.split(tensor, split_size_or_sections, dim=0)[source]

Splits the tensor into chunks.

If split_size_or_sections is an integer type, then tensor will be split into equally sized chunks (if possible). Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size.

If split_size_or_sections is a list, then tensor will be split into len(split_size_or_sections) chunks with sizes in dim according to split_size_or_sections.

Parameters
  • tensor (Tensor) – tensor to split.

  • split_size_or_sections (int) or (list(int)) – size of a single chunk or list of sizes for each chunk

  • dim (int) – dimension along which to split the tensor.
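
For example (an even split, then an explicit list of section sizes):

>>> x = torch.arange(6).reshape(3, 2)
>>> torch.split(x, 2)
(tensor([[0, 1],
        [2, 3]]), tensor([[4, 5]]))
>>> torch.split(x, [1, 2])
(tensor([[0, 1]]), tensor([[2, 3],
        [4, 5]]))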

torch.squeeze(input, dim=None, out=None) → Tensor

Returns a tensor with all the dimensions of input of size 1 removed.

For example, if input is of shape $(A \times 1 \times B \times C \times 1 \times D)$, then the out tensor will be of shape $(A \times B \times C \times D)$.

When dim is given, a squeeze operation is done only in the given dimension. If input is of shape $(A \times 1 \times B)$, squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape $(A \times B)$.

Note

The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int, optional) – if given, the input will be squeezed only in this dimension

  • out (Tensor, optional) – the output tensor.

Example:

>>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])
>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2])
torch.stack(tensors, dim=0, out=None) → Tensor

Concatenates sequence of tensors along a new dimension.

All tensors need to be of the same size.

Parameters
  • tensors (sequence of Tensors) – sequence of tensors to concatenate

  • dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive)

  • out (Tensor, optional) – the output tensor.
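
For example, stacking two (2, 3) tensors inserts a new dimension at the given position:

>>> a = torch.randn(2, 3)
>>> b = torch.randn(2, 3)
>>> torch.stack((a, b)).size()
torch.Size([2, 2, 3])
>>> torch.stack((a, b), dim=2).size()
torch.Size([2, 3, 2])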

torch.t(input) → Tensor

Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.

0-D and 1-D tensors are returned as is. For a 2-D tensor, this can be seen as a short-hand function for transpose(input, 0, 1).

Parameters

input (Tensor) – the input tensor.

Example:

>>> x = torch.randn(())
>>> x
tensor(0.1995)
>>> torch.t(x)
tensor(0.1995)
>>> x = torch.randn(3)
>>> x
tensor([ 2.4320, -0.4608,  0.7702])
>>> torch.t(x)
tensor([ 2.4320, -0.4608,  0.7702])
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.4875,  0.9158, -0.5872],
        [ 0.3938, -0.6929,  0.6932]])
>>> torch.t(x)
tensor([[ 0.4875,  0.3938],
        [ 0.9158, -0.6929],
        [-0.5872,  0.6932]])
torch.take(input, index) → Tensor

Returns a new tensor with the elements of input at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices.

Parameters
  • input (Tensor) – the input tensor.

  • index (LongTensor) – the indices into tensor

Example:

>>> src = torch.tensor([[4, 3, 5],
                        [6, 7, 8]])
>>> torch.take(src, torch.tensor([0, 2, 5]))
tensor([ 4,  5,  8])
torch.transpose(input, dim0, dim1) → Tensor

Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.

The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.

Parameters
  • input (Tensor) – the input tensor.

  • dim0 (int) – the first dimension to be transposed

  • dim1 (int) – the second dimension to be transposed

Example:

>>> x = torch.randn(2, 3)
>>> x
tensor([[ 1.0028, -0.9893,  0.5809],
        [-0.1669,  0.7299,  0.4942]])
>>> torch.transpose(x, 0, 1)
tensor([[ 1.0028, -0.1669],
        [-0.9893,  0.7299],
        [ 0.5809,  0.4942]])
torch.unbind(input, dim=0) → seq

Removes a tensor dimension.

Returns a tuple of all slices along a given dimension, already without it.

Parameters
  • input (Tensor) – the tensor to unbind

  • dim (int) – dimension to remove

Example:

>>> torch.unbind(torch.tensor([[1, 2, 3],
                               [4, 5, 6],
                               [7, 8, 9]]))
(tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))
torch.unsqueeze(input, dim, out=None) → Tensor

Returns a new tensor with a dimension of size one inserted at the specified position.

The returned tensor shares the same underlying data with this tensor.

A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze() applied at dim = dim + input.dim() + 1.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the index at which to insert the singleton dimension

  • out (Tensor, optional) – the output tensor.

Example:

>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1,  2,  3,  4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
        [ 2],
        [ 3],
        [ 4]])
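
A negative dim counts from the end, so -1 inserts the new dimension last:

>>> torch.unsqueeze(x, -1).size()
torch.Size([4, 1])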
torch.where()
torch.where(condition, x, y) → Tensor

Return a tensor of elements selected from either x or y, depending on condition.

The operation is defined as:

$\text{out}_i = \begin{cases} \text{x}_i & \text{if } \text{condition}_i \\ \text{y}_i & \text{otherwise} \end{cases}$

Note

The tensors condition, x, y must be broadcastable.

Parameters
  • condition (BoolTensor) – When True (nonzero), yield x, otherwise yield y

  • x (Tensor) – values selected at indices where condition is True

  • y (Tensor) – values selected at indices where condition is False

Returns

A tensor of shape equal to the broadcasted shape of condition, x, y

Return type

Tensor

Example:

>>> x = torch.randn(3, 2)
>>> y = torch.ones(3, 2)
>>> x
tensor([[-0.4620,  0.3139],
        [ 0.3898, -0.7197],
        [ 0.0478, -0.1657]])
>>> torch.where(x > 0, x, y)
tensor([[ 1.0000,  0.3139],
        [ 0.3898,  1.0000],
        [ 0.0478,  1.0000]])
torch.where(condition) → tuple of LongTensor

torch.where(condition) is identical to torch.nonzero(condition, as_tuple=True).

Note

See also torch.nonzero().
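
For example:

>>> torch.where(torch.tensor([False, True, False, True]))
(tensor([1, 3]),)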

Generators

class torch._C.Generator(device='cpu') → Generator

Creates and returns a generator object which manages the state of the algorithm that produces pseudo random numbers. Used as a keyword argument in many in-place random sampling functions.

Parameters

device (torch.device, optional) – the desired device for the generator.

Returns

A torch.Generator object.

Return type

Generator

Example:

>>> g_cpu = torch.Generator()
>>> g_cuda = torch.Generator(device='cuda')
Generator.device → device

Gets the current device of the generator.

Example:

>>> g_cpu = torch.Generator()
>>> g_cpu.device
device(type='cpu')
get_state() → Tensor

Returns the Generator state as a torch.ByteTensor.

Returns

A torch.ByteTensor which contains all the necessary bits to restore a Generator to a specific point in time.

Return type

Tensor

Example:

>>> g_cpu = torch.Generator()
>>> g_cpu.get_state()
initial_seed() → int

Returns the initial seed for generating random numbers.

Example:

>>> g_cpu = torch.Generator()
>>> g_cpu.initial_seed()
2147483647
manual_seed(seed) → Generator

Sets the seed for generating random numbers. Returns a torch.Generator object. It is recommended to set a large seed, i.e. a number that has a good balance of 0 and 1 bits. Avoid having many 0 bits in the seed.

Parameters

seed (int) – The desired seed.

Returns

A torch.Generator object.

Return type

Generator

Example:

>>> g_cpu = torch.Generator()
>>> g_cpu.manual_seed(2147483647)
seed() → int

Gets a non-deterministic random number from std::random_device or the current time and uses it to seed a Generator.

Example:

>>> g_cpu = torch.Generator()
>>> g_cpu.seed()
1516516984916
set_state(new_state) → void

Sets the Generator state.

Parameters

new_state (torch.ByteTensor) – The desired state.

Example:

>>> g_cpu = torch.Generator()
>>> g_cpu_other = torch.Generator()
>>> g_cpu.set_state(g_cpu_other.get_state())

Random sampling

torch.seed()[source]

Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG.

torch.manual_seed(seed)[source]

Sets the seed for generating random numbers. Returns a torch.Generator object.

Parameters

seed (int) – The desired seed.
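
A short reproducibility check (re-seeding replays the same pseudo-random sequence):

>>> torch.manual_seed(42)
>>> a = torch.rand(2)
>>> torch.manual_seed(42)
>>> b = torch.rand(2)
>>> torch.equal(a, b)
True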

torch.initial_seed()[source]

Returns the initial seed for generating random numbers as a Python long.

torch.get_rng_state()[source]

Returns the random number generator state as a torch.ByteTensor.

torch.set_rng_state(new_state)[source]

Sets the random number generator state.

Parameters

new_state (torch.ByteTensor) – The desired state

torch.default_generator

Returns the default CPU torch.Generator.
torch.bernoulli(input, *, generator=None, out=None) → Tensor

Draws binary random numbers (0 or 1) from a Bernoulli distribution.

The input tensor should be a tensor containing probabilities to be used for drawing the binary random number. Hence, all values in input have to be in the range $0 \leq \text{input}_i \leq 1$.

The $i^{\text{th}}$ element of the output tensor will draw a value $1$ according to the $i^{\text{th}}$ probability value given in input.

$\text{out}_{i} \sim \mathrm{Bernoulli}(p = \text{input}_{i})$

The returned out tensor only has values 0 or 1 and is of the same shape as input.

out can have integral dtype, but input must have floating point dtype.

Parameters
  • input (Tensor) – the input tensor of probability values for the Bernoulli distribution

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.empty(3, 3).uniform_(0, 1)  # generate a uniform random matrix with range [0, 1]
>>> a
tensor([[ 0.1737,  0.0950,  0.3609],
        [ 0.7148,  0.0289,  0.2676],
        [ 0.9456,  0.8937,  0.7202]])
>>> torch.bernoulli(a)
tensor([[ 1.,  0.,  0.],
        [ 0.,  0.,  0.],
        [ 1.,  1.,  1.]])

>>> a = torch.ones(3, 3) # probability of drawing "1" is 1
>>> torch.bernoulli(a)
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])
>>> a = torch.zeros(3, 3) # probability of drawing "1" is 0
>>> torch.bernoulli(a)
tensor([[ 0.,  0.,  0.],
        [ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])
torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor

Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Indices are ordered from left to right according to when each was sampled (first samples are placed in first column).

If input is a vector, out is a vector of size num_samples.

If input is a matrix with m rows, out is a matrix of shape $(m \times \text{num\_samples})$.

If replacement is True, samples are drawn with replacement.

If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.

Note

When drawn without replacement, num_samples must be lower than the number of non-zero elements in input (or the minimum number of non-zero elements in each row of input if it is a matrix).

Parameters
  • input (Tensor) – the input tensor containing probabilities

  • num_samples (int) – number of samples to draw

  • replacement (bool, optional) – whether to draw with replacement or not

  • out (Tensor, optional) – the output tensor.

Example:

>>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float) # create a tensor of weights
>>> torch.multinomial(weights, 2)
tensor([1, 2])
>>> torch.multinomial(weights, 4) # ERROR!
RuntimeError: invalid argument 2: invalid multinomial distribution (with replacement=False,
not enough non-negative category to sample) at ../aten/src/TH/generic/THTensorRandom.cpp:320
>>> torch.multinomial(weights, 4, replacement=True)
tensor([ 2,  1,  1,  1])
torch.normal()
torch.normal(mean, std, out=None) → Tensor

Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given.

The mean is a tensor with the mean of each output element’s normal distribution

The std is a tensor with the standard deviation of each output element’s normal distribution

The shapes of mean and std don’t need to match, but the total number of elements in each tensor needs to be the same.

Note

When the shapes do not match, the shape of mean is used as the shape for the returned output tensor

Parameters
  • mean (Tensor) – the tensor of per-element means

  • std (Tensor) – the tensor of per-element standard deviations

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))
tensor([  1.0425,   3.5672,   2.7969,   4.2925,   4.7229,   6.2134,
          8.0505,   8.1408,   9.0563,  10.0566])
torch.normal(mean=0.0, std, out=None) → Tensor

Similar to the function above, but the means are shared among all drawn elements.

Parameters
  • mean (float, optional) – the mean for all distributions

  • std (Tensor) – the tensor of per-element standard deviations

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.normal(mean=0.5, std=torch.arange(1., 6.))
tensor([-1.2793, -1.0732, -2.0687,  5.1177, -1.2303])
torch.normal(mean, std=1.0, out=None) → Tensor

Similar to the function above, but the standard-deviations are shared among all drawn elements.

Parameters
  • mean (Tensor) – the tensor of per-element means

  • std (float, optional) – the standard deviation for all distributions

  • out (Tensor, optional) – the output tensor

Example:

>>> torch.normal(mean=torch.arange(1., 6.))
tensor([ 1.1552,  2.6148,  2.6535,  5.8318,  4.2361])
torch.normal(mean, std, size, *, out=None) → Tensor

Similar to the function above, but the means and standard deviations are shared among all drawn elements. The resulting tensor has size given by size.

Parameters
  • mean (float) – the mean for all distributions

  • std (float) – the standard deviation for all distributions

  • size (int...) – a sequence of integers defining the shape of the output tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.normal(2, 3, size=(1, 4))
tensor([[-1.3987, -1.9544,  3.6048,  0.7909]])
torch.rand(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1).

The shape of the tensor is defined by the variable argument size.

Parameters
  • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.rand(4)
tensor([ 0.5204,  0.2503,  0.3525,  0.5673])
>>> torch.rand(2, 3)
tensor([[ 0.8237,  0.5781,  0.6879],
        [ 0.3816,  0.7249,  0.0998]])
torch.rand_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval [0, 1). torch.rand_like(input) is equivalent to torch.rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
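
Example (illustrative; the values are random, but the shape and dtype follow input):

>>> x = torch.zeros(2, 3, dtype=torch.float64)
>>> torch.rand_like(x)
tensor([[0.3126, 0.3791, 0.3087],
        [0.0736, 0.4216, 0.0691]], dtype=torch.float64)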

torch.randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).

The shape of the tensor is defined by the variable argument size.

Parameters
  • low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.

  • high (int) – One above the highest integer to be drawn from the distribution.

  • size (tuple) – a tuple defining the shape of the output tensor.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.randint(3, 5, (3,))
tensor([4, 3, 4])


>>> torch.randint(10, (2, 2))
tensor([[0, 2],
        [5, 5]])


>>> torch.randint(3, 10, (2, 2))
tensor([[4, 5],
        [6, 7]])
torch.randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.

  • high (int) – One above the highest integer to be drawn from the distribution.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
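
Example (illustrative; the values are random, but the shape and dtype follow input):

>>> x = torch.zeros(2, 3, dtype=torch.int64)
>>> torch.randint_like(x, 10)      # integers in [0, 10)
tensor([[7, 2, 5],
        [0, 9, 4]])
>>> torch.randint_like(x, 3, 10)   # integers in [3, 10)
tensor([[4, 8, 3],
        [6, 5, 9]])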

torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).

\text{out}_{i} \sim \mathcal{N}(0, 1)

The shape of the tensor is defined by the variable argument size.

Parameters
  • size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.randn(4)
tensor([-2.1436,  0.9966,  2.3426, -0.6366])
>>> torch.randn(2, 3)
tensor([[ 1.5954,  2.8929, -1.0923],
        [ 1.1719, -0.4709, -0.1996]])
torch.randn_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. torch.randn_like(input) is equivalent to torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

Parameters
  • input (Tensor) – the size of input will determine size of the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.

  • layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
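
Example (illustrative; the values are random, but the shape and dtype follow input):

>>> x = torch.empty(2, 2)
>>> torch.randn_like(x)
tensor([[ 0.3011, -1.2505],
        [ 0.7542,  0.1507]])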

torch.randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) → LongTensor

Returns a random permutation of integers from 0 to n - 1.

Parameters
  • n (int) – the upper bound (exclusive)

  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: torch.int64.

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.randperm(4)
tensor([2, 1, 0, 3])

In-place random sampling

There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation:

  • torch.Tensor.bernoulli_() – in-place version of torch.bernoulli()

  • torch.Tensor.cauchy_() – numbers drawn from the Cauchy distribution

  • torch.Tensor.exponential_() – numbers drawn from the exponential distribution

  • torch.Tensor.geometric_() – elements drawn from the geometric distribution

  • torch.Tensor.log_normal_() – samples from the log-normal distribution

  • torch.Tensor.normal_() – in-place version of torch.normal()

  • torch.Tensor.random_() – numbers sampled from the discrete uniform distribution

  • torch.Tensor.uniform_() – numbers sampled from the continuous uniform distribution
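
For instance (illustrative; the values are random):

>>> t = torch.empty(2, 2)
>>> t.uniform_(0, 1)            # fill t in place with samples from U[0, 1)
tensor([[0.5153, 0.4414],
        [0.1939, 0.8954]])
>>> t.normal_(mean=0., std=1.)  # refill t in place with samples from N(0, 1)
tensor([[-0.3425,  1.2110],
        [ 0.6813, -0.4107]])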

Quasi-random sampling

class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)[source]

The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. Sobol sequences are an example of low discrepancy quasi-random sequences.

This implementation of an engine for Sobol sequences is capable of sampling sequences up to a maximum dimension of 1111. It uses direction numbers to generate these sequences, and these numbers have been adapted from the direction numbers published by Joe and Kuo.

References

  • Art B. Owen. Scrambling Sobol and Niederreiter-Xing points. Journal of Complexity, 14(4):466-489, December 1998.

  • I. M. Sobol. The distribution of points in a cube and the accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys., 7:784-802, 1967.

Parameters
  • dimension (Int) – The dimensionality of the sequence to be drawn

  • scramble (bool, optional) – Setting this to True will produce scrambled Sobol sequences. Scrambling is capable of producing better Sobol sequences. Default: False.

  • seed (Int, optional) – This is the seed for the scrambling. The seed of the random number generator is set to this, if specified. Otherwise, it uses a random seed. Default: None

Examples:

>>> soboleng = torch.quasirandom.SobolEngine(dimension=5)
>>> soboleng.draw(3)
tensor([[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
        [0.7500, 0.2500, 0.7500, 0.2500, 0.7500],
        [0.2500, 0.7500, 0.2500, 0.7500, 0.2500]])
draw(n=1, out=None, dtype=torch.float32)[source]

Function to draw a sequence of n points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (n, dimension).

Parameters
  • n (Int, optional) – The length of sequence of points to draw. Default: 1

  • out (Tensor, optional) – The output tensor

  • dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: torch.float32

fast_forward(n)[source]

Function to fast-forward the state of the SobolEngine by n steps. This is equivalent to drawing n samples and discarding them.

Parameters

n (Int) – The number of steps to fast-forward by.

reset()[source]

Function to reset the SobolEngine to base state.
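
Example tying the three methods together; unscrambled Sobol points are deterministic, so the outputs below follow from the sequence shown above (the return values of fast_forward and reset are ignored):

>>> eng = torch.quasirandom.SobolEngine(dimension=2)
>>> _ = eng.fast_forward(2)   # skip the first two points of the sequence
>>> eng.draw(1)               # the third point
tensor([[0.2500, 0.7500]])
>>> _ = eng.reset()           # back to the base state
>>> eng.draw(1)               # the first point again
tensor([[0.5000, 0.5000]])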

Serialization

torch.save(obj, f, pickle_module=<module 'pickle' from '/opt/conda/lib/python3.6/pickle.py'>, pickle_protocol=2)[source]

Saves an object to a disk file.

See also: Recommended approach for saving a model

Parameters
  • obj – saved object

  • f – a file-like object (has to implement write and flush) or a string containing a file name

  • pickle_module – module used for pickling metadata and objects

  • pickle_protocol – can be specified to override the default protocol

Warning

If you are using Python 2, torch.save() does NOT support StringIO.StringIO as a valid file-like object. This is because the write method should return the number of bytes written; StringIO.write() does not do this.

Please use something like io.BytesIO instead.

Example

>>> # Save to file
>>> x = torch.tensor([0, 1, 2, 3, 4])
>>> torch.save(x, 'tensor.pt')
>>> # Save to io.BytesIO buffer
>>> buffer = io.BytesIO()
>>> torch.save(x, buffer)
torch.load(f, map_location=None, pickle_module=<module 'pickle' from '/opt/conda/lib/python3.6/pickle.py'>, **pickle_load_args)[source]

Loads an object saved with torch.save() from a file.

torch.load() uses Python’s unpickling facilities but treats storages, which underlie tensors, specially. They are first deserialized on the CPU and are then moved to the device they were saved from. If this fails (e.g. because the run time system doesn’t have certain devices), an exception is raised. However, storages can be dynamically remapped to an alternative set of devices using the map_location argument.

If map_location is a callable, it will be called once for each serialized storage with two arguments: storage and location. The storage argument will be the initial deserialization of the storage, residing on the CPU. Each serialized storage has a location tag associated with it which identifies the device it was saved from, and this tag is the second argument passed to map_location. The builtin location tags are 'cpu' for CPU tensors and 'cuda:device_id' (e.g. 'cuda:2') for CUDA tensors. map_location should return either None or a storage. If map_location returns a storage, it will be used as the final deserialized object, already moved to the right device. Otherwise, torch.load() will fall back to the default behavior, as if map_location wasn’t specified.

If map_location is a torch.device object or a string containing a device tag, it indicates the location where all tensors should be loaded.

Otherwise, if map_location is a dict, it will be used to remap location tags appearing in the file (keys), to ones that specify where to put the storages (values).

User extensions can register their own location tags and tagging and deserialization methods using torch.serialization.register_package().

Parameters
  • f – a file-like object (has to implement read(), readline(), tell(), and seek()), or a string containing a file name

  • map_location – a function, torch.device, string or a dict specifying how to remap storage locations

  • pickle_module – module used for unpickling metadata and objects (has to match the pickle_module used to serialize file)

  • pickle_load_args – (Python 3 only) optional keyword arguments passed over to pickle_module.load() and pickle_module.Unpickler(), e.g., errors=....

Note

When you call torch.load() on a file which contains GPU tensors, those tensors will be loaded to GPU by default. You can call torch.load(..., map_location='cpu') and then load_state_dict() to avoid GPU RAM surge when loading a model checkpoint.
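
A sketch of that pattern (the file name checkpoint.pt is hypothetical, and model stands for any pre-built torch.nn.Module whose architecture matches the checkpoint):

>>> state_dict = torch.load('checkpoint.pt', map_location='cpu')  # deserialize on CPU
>>> model.load_state_dict(state_dict)  # copy the CPU tensors into the model's parameters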

Note

By default, we decode byte strings as utf-8. This is to avoid a common error case UnicodeDecodeError: 'ascii' codec can't decode byte 0x... when loading files saved by Python 2 in Python 3. If this default is incorrect, you may use an extra encoding keyword argument to specify how these objects should be loaded, e.g., encoding='latin1' decodes them to strings using latin1 encoding, and encoding='bytes' keeps them as byte arrays which can be decoded later with byte_array.decode(...).

Example

>>> torch.load('tensors.pt')
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
# Load tensor from io.BytesIO object
>>> with open('tensor.pt', 'rb') as f:
...     buffer = io.BytesIO(f.read())
>>> torch.load(buffer)
# Load a module with 'ascii' encoding for unpickling
>>> torch.load('module.pt', encoding='ascii')

Parallelism

torch.get_num_threads() → int

Returns the number of threads used for parallelizing CPU operations

torch.set_num_threads(int)

Sets the number of threads used for intraop parallelism on CPU. WARNING: To ensure that the correct number of threads is used, set_num_threads must be called before running eager, JIT or autograd code.

torch.get_num_interop_threads() → int

Returns the number of threads used for inter-op parallelism on CPU (e.g. in JIT interpreter)

torch.set_num_interop_threads(int)

Sets the number of threads used for interop parallelism (e.g. in JIT interpreter) on CPU. WARNING: Can only be called once and before any inter-op parallel work is started (e.g. JIT execution).
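
A minimal sketch of these four calls (the thread counts reported on a given machine will differ):

>>> torch.set_num_threads(4)            # must run before eager, JIT or autograd work
>>> torch.get_num_threads()
4
>>> torch.set_num_interop_threads(2)    # may only be called once, before inter-op work starts
>>> torch.get_num_interop_threads()
2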

Locally disabling gradient computation

The context managers torch.no_grad(), torch.enable_grad(), and torch.set_grad_enabled() are helpful for locally disabling and enabling gradient computation. See Locally disabling gradient computation for more details on their usage. These context managers are thread local, so they won’t work if you send work to another thread using the threading module, etc.

Examples:

>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False

>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False

>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True

>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False

Math operations

Pointwise Ops

torch.abs(input, out=None) → Tensor

Computes the element-wise absolute value of the given input tensor.

\text{out}_{i} = |\text{input}_{i}|
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.abs(torch.tensor([-1, -2, 3]))
tensor([ 1,  2,  3])
torch.acos(input, out=None) → Tensor

Returns a new tensor with the arccosine of the elements of input.

\text{out}_{i} = \cos^{-1}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.3348, -0.5889,  0.2005, -0.1584])
>>> torch.acos(a)
tensor([ 1.2294,  2.2004,  1.3690,  1.7298])
torch.add()
torch.add(input, other, out=None)

Adds the scalar other to each element of the input input and returns a new resulting tensor.

\text{out} = \text{input} + \text{other}

If input is of type FloatTensor or DoubleTensor, other must be a real number, otherwise it should be an integer.

Parameters
  • input (Tensor) – the input tensor.

  • other (Number) – the number to be added to each element of input

Keyword Arguments

out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.0202,  1.0985,  1.3506, -0.6056])
>>> torch.add(a, 20)
tensor([ 20.0202,  21.0985,  21.3506,  19.3944])
torch.add(input, alpha=1, other, out=None)

Each element of the tensor other is multiplied by the scalar alpha and added to each element of the tensor input. The resulting tensor is returned.

The shapes of input and other must be broadcastable.

\text{out} = \text{input} + \text{alpha} \times \text{other}

If other is of type FloatTensor or DoubleTensor, alpha must be a real number, otherwise it should be an integer.

Parameters
  • input (Tensor) – the first input tensor

  • alpha (Number) – the scalar multiplier for other

  • other (Tensor) – the second input tensor

Keyword Arguments

out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.9732, -0.3497,  0.6245,  0.4022])
>>> b = torch.randn(4, 1)
>>> b
tensor([[ 0.3743],
        [-1.7724],
        [-0.5811],
        [-0.8017]])
>>> torch.add(a, 10, b)
tensor([[  2.7695,   3.3930,   4.3672,   4.1450],
        [-18.6971, -18.0736, -17.0994, -17.3216],
        [ -6.7845,  -6.1610,  -5.1868,  -5.4090],
        [ -8.9902,  -8.3667,  -7.3925,  -7.6147]])
torch.addcdiv(input, value=1, tensor1, tensor2, out=None) → Tensor

Performs the element-wise division of tensor1 by tensor2, multiply the result by the scalar value and add it to input.

\text{out}_i = \text{input}_i + \text{value} \times \frac{\text{tensor1}_i}{\text{tensor2}_i}

The shapes of input, tensor1, and tensor2 must be broadcastable.

For inputs of type FloatTensor or DoubleTensor, value must be a real number, otherwise an integer.

Parameters
  • input (Tensor) – the tensor to be added

  • value (Number, optional) – multiplier for tensor1 / tensor2

  • tensor1 (Tensor) – the numerator tensor

  • tensor2 (Tensor) – the denominator tensor

  • out (Tensor, optional) – the output tensor.

Example:

>>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
>>> torch.addcdiv(t, 0.1, t1, t2)
tensor([[-0.2312, -3.6496,  0.1312],
        [-1.0428,  3.4292, -0.1030],
        [-0.5369, -0.9829,  0.0430]])
torch.addcmul(input, value=1, tensor1, tensor2, out=None) → Tensor

Performs the element-wise multiplication of tensor1 by tensor2, multiply the result by the scalar value and add it to input.

\text{out}_i = \text{input}_i + \text{value} \times \text{tensor1}_i \times \text{tensor2}_i

The shapes of input, tensor1, and tensor2 must be broadcastable.

For inputs of type FloatTensor or DoubleTensor, value must be a real number, otherwise an integer.

Parameters
  • input (Tensor) – the tensor to be added

  • value (Number, optional) – multiplier for tensor1 .* tensor2

  • tensor1 (Tensor) – the tensor to be multiplied

  • tensor2 (Tensor) – the tensor to be multiplied

  • out (Tensor, optional) – the output tensor.

Example:

>>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
>>> torch.addcmul(t, 0.1, t1, t2)
tensor([[-0.8635, -0.6391,  1.6174],
        [-0.7617, -0.5879,  1.7388],
        [-0.8353, -0.6249,  1.6511]])
torch.angle(input, out=None) → Tensor

Computes the element-wise angle (in radians) of the given input tensor.

\text{out}_{i} = \operatorname{angle}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.angle(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))*180/3.14159
tensor([ 135.,  135.,  -45.])
torch.asin(input, out=None) → Tensor

Returns a new tensor with the arcsine of the elements of input.

\text{out}_{i} = \sin^{-1}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.5962,  1.4985, -0.4396,  1.4525])
>>> torch.asin(a)
tensor([-0.6387,     nan, -0.4552,     nan])
torch.atan(input, out=None) → Tensor

Returns a new tensor with the arctangent of the elements of input.

\text{out}_{i} = \tan^{-1}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.2341,  0.2539, -0.6256, -0.6448])
>>> torch.atan(a)
tensor([ 0.2299,  0.2487, -0.5591, -0.5727])
torch.atan2(input, other, out=None) → Tensor

Element-wise arctangent of \text{input}_{i} / \text{other}_{i} with consideration of the quadrant. Returns a new tensor with the signed angles in radians between vector (\text{other}_{i}, \text{input}_{i}) and vector (1, 0). (Note that \text{other}_{i}, the second parameter, is the x-coordinate, while \text{input}_{i}, the first parameter, is the y-coordinate.)

The shapes of input and other must be broadcastable.

Parameters
  • input (Tensor) – the first input tensor

  • other (Tensor) – the second input tensor

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.9041,  0.0196, -0.3108, -2.4423])
>>> torch.atan2(a, torch.randn(4))
tensor([ 0.9833,  0.0811, -1.9743, -1.4151])
torch.bitwise_not(input, out=None) → Tensor

Computes the bitwise NOT of the given input tensor. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical NOT.

Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example

>>> torch.bitwise_not(torch.tensor([-1, -2, 3], dtype=torch.int8))
tensor([ 0,  1, -4], dtype=torch.int8)
torch.ceil(input, out=None) → Tensor

Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element.

\text{out}_{i} = \left\lceil \text{input}_{i} \right\rceil
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.6341, -1.4208, -1.0900,  0.5826])
>>> torch.ceil(a)
tensor([-0., -1., -1.,  1.])
torch.clamp(input, min, max, out=None) → Tensor

Clamp all elements in input into the range [ min, max ] and return a resulting tensor:

y_i = \begin{cases} \text{min} & \text{if } x_i < \text{min} \\ x_i & \text{if } \text{min} \leq x_i \leq \text{max} \\ \text{max} & \text{if } x_i > \text{max} \end{cases}

If input is of type FloatTensor or DoubleTensor, args min and max must be real numbers, otherwise they should be integers.

Parameters
  • input (Tensor) – the input tensor.

  • min (Number) – lower-bound of the range to be clamped to

  • max (Number) – upper-bound of the range to be clamped to

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-1.7120,  0.1734, -0.0478, -0.0922])
>>> torch.clamp(a, min=-0.5, max=0.5)
tensor([-0.5000,  0.1734, -0.0478, -0.0922])
torch.clamp(input, *, min, out=None) → Tensor

Clamps all elements in input to be larger than or equal to min.

If input is of type FloatTensor or DoubleTensor, value should be a real number, otherwise it should be an integer.

Parameters
  • input (Tensor) – the input tensor.

  • min (Number) – minimal value of each element in the output

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.0299, -2.3184,  2.1593, -0.8883])
>>> torch.clamp(a, min=0.5)
tensor([ 0.5000,  0.5000,  2.1593,  0.5000])
torch.clamp(input, *, max, out=None) → Tensor

Clamps all elements in input to be smaller than or equal to max.

If input is of type FloatTensor or DoubleTensor, value should be a real number, otherwise it should be an integer.

Parameters
  • input (Tensor) – the input tensor.

  • max (Number) – maximal value of each element in the output

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.7753, -0.4702, -0.4599,  1.1899])
>>> torch.clamp(a, max=0.5)
tensor([ 0.5000, -0.4702, -0.4599,  0.5000])
torch.conj(input, out=None) → Tensor

Computes the element-wise conjugate of the given input tensor.

\text{out}_{i} = \operatorname{conj}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.conj(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))
tensor([-1 - 1j, -2 - 2j, 3 + 3j])
torch.cos(input, out=None) → Tensor

Returns a new tensor with the cosine of the elements of input.

\text{out}_{i} = \cos(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 1.4309,  1.2706, -0.8562,  0.9796])
>>> torch.cos(a)
tensor([ 0.1395,  0.2957,  0.6553,  0.5574])
torch.cosh(input, out=None) → Tensor

Returns a new tensor with the hyperbolic cosine of the elements of input.

\text{out}_{i} = \cosh(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.1632,  1.1835, -0.6979, -0.7325])
>>> torch.cosh(a)
tensor([ 1.0133,  1.7860,  1.2536,  1.2805])
torch.div()
torch.div(input, other, out=None) → Tensor

Divides each element of the input input with the scalar other and returns a new resulting tensor.

\text{out}_i = \frac{\text{input}_i}{\text{other}}

If the torch.dtype of input and other differ, the torch.dtype of the result tensor is determined following rules described in the type promotion documentation. If out is specified, the result must be castable to the torch.dtype of the specified output tensor. Integral division by zero leads to undefined behavior.

Parameters
  • input (Tensor) – the input tensor.

  • other (Number) – the number to divide each element of input by

Keyword Arguments

out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(5)
>>> a
tensor([ 0.3810,  1.2774, -0.2972, -0.3719,  0.4637])
>>> torch.div(a, 0.5)
tensor([ 0.7620,  2.5548, -0.5944, -0.7439,  0.9275])
torch.div(input, other, out=None) → Tensor

Each element of the tensor input is divided by each element of the tensor other. The resulting tensor is returned.

\text{out}_i = \frac{\text{input}_i}{\text{other}_i}

The shapes of input and other must be broadcastable. If the torch.dtype of input and other differ, the torch.dtype of the result tensor is determined following rules described in the type promotion documentation. If out is specified, the result must be castable to the torch.dtype of the specified output tensor. Integral division by zero leads to undefined behavior.

Parameters
  • input (Tensor) – the numerator tensor

  • other (Tensor) – the denominator tensor

Keyword Arguments

out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3711, -1.9353, -0.4605, -0.2917],
        [ 0.1815, -1.0111,  0.9805, -1.5923],
        [ 0.1062,  1.4581,  0.7759, -1.2344],
        [-0.1830, -0.0313,  1.1908, -1.4757]])
>>> b = torch.randn(4)
>>> b
tensor([ 0.8032,  0.2930, -0.8113, -0.2308])
>>> torch.div(a, b)
tensor([[-0.4620, -6.6051,  0.5676,  1.2637],
        [ 0.2260, -3.4507, -1.2086,  6.8988],
        [ 0.1322,  4.9764, -0.9564,  5.3480],
        [-0.2278, -0.1068, -1.4678,  6.3936]])
torch.digamma(input, out=None) → Tensor

Computes the logarithmic derivative of the gamma function on input.

\psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)}
Parameters

input (Tensor) – the tensor to compute the digamma function on

Example:

>>> a = torch.tensor([1, 0.5])
>>> torch.digamma(a)
tensor([-0.5772, -1.9635])
torch.erf(input, out=None) → Tensor

Computes the error function of each element. The error function is defined as follows:

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.erf(torch.tensor([0, -1., 10.]))
tensor([ 0.0000, -0.8427,  1.0000])
torch.erfc(input, out=None) → Tensor

Computes the complementary error function of each element of input. The complementary error function is defined as follows:

\mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.erfc(torch.tensor([0, -1., 10.]))
tensor([ 1.0000, 1.8427,  0.0000])
torch.erfinv(input, out=None) → Tensor

Computes the inverse error function of each element of input. The inverse error function is defined in the range (-1, 1) as:

\mathrm{erfinv}(\mathrm{erf}(x)) = x
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.erfinv(torch.tensor([0, 0.5, -1.]))
tensor([ 0.0000,  0.4769,    -inf])
torch.exp(input, out=None) → Tensor

Returns a new tensor with the exponential of the elements of the input tensor input.

y_{i} = e^{x_{i}}
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.exp(torch.tensor([0, math.log(2.)]))
tensor([ 1.,  2.])
torch.expm1(input, out=None) → Tensor

Returns a new tensor with the exponential of the elements of input, minus 1.

y_{i} = e^{x_{i}} - 1
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.expm1(torch.tensor([0, math.log(2.)]))
tensor([ 0.,  1.])
torch.floor(input, out=None) → Tensor

Returns a new tensor with the floor of the elements of input, the largest integer less than or equal to each element.

\text{out}_{i} = \left\lfloor \text{input}_{i} \right\rfloor
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.8166,  1.5308, -0.2530, -0.2091])
>>> torch.floor(a)
tensor([-1.,  1., -1., -1.])
torch.fmod(input, other, out=None) → Tensor

Computes the element-wise remainder of division.

The dividend and divisor may contain both integer and floating point numbers. The remainder has the same sign as the dividend input.

When other is a tensor, the shapes of input and other must be broadcastable.

Parameters
  • input (Tensor) – the dividend

  • other (Tensor or float) – the divisor, which may be either a number or a tensor of the same shape as the dividend

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.fmod(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([-1., -0., -1.,  1.,  0.,  1.])
>>> torch.fmod(torch.tensor([1., 2, 3, 4, 5]), 1.5)
tensor([ 1.0000,  0.5000,  0.0000,  1.0000,  0.5000])
torch.frac(input, out=None) → Tensor

Computes the fractional portion of each element in input.

\text{out}_{i} = \text{input}_{i} - \left\lfloor |\text{input}_{i}| \right\rfloor * \operatorname{sgn}(\text{input}_{i})

Example:

>>> torch.frac(torch.tensor([1, 2.5, -3.2]))
tensor([ 0.0000,  0.5000, -0.2000])
torch.imag(input, out=None) → Tensor

Computes the element-wise imaginary value of the given input tensor.

\text{out}_{i} = \operatorname{imag}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.imag(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))
tensor([ 1,  2,  -3])
torch.lerp(input, end, weight, out=None)

Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor.

\text{out}_i = \text{start}_i + \text{weight}_i \times (\text{end}_i - \text{start}_i)

The shapes of start and end must be broadcastable. If weight is a tensor, then the shapes of weight, start, and end must be broadcastable.

Parameters
  • input (Tensor) – the tensor with the starting points

  • end (Tensor) – the tensor with the ending points

  • weight (float or tensor) – the weight for the interpolation formula

  • out (Tensor, optional) – the output tensor.

Example:

>>> start = torch.arange(1., 5.)
>>> end = torch.empty(4).fill_(10)
>>> start
tensor([ 1.,  2.,  3.,  4.])
>>> end
tensor([ 10.,  10.,  10.,  10.])
>>> torch.lerp(start, end, 0.5)
tensor([ 5.5000,  6.0000,  6.5000,  7.0000])
>>> torch.lerp(start, end, torch.full_like(start, 0.5))
tensor([ 5.5000,  6.0000,  6.5000,  7.0000])
torch.log(input, out=None) → Tensor

Returns a new tensor with the natural logarithm of the elements of input.

y_{i} = \log_{e}(x_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(5)
>>> a
tensor([-0.7168, -0.5471, -0.8933, -1.4428, -0.1190])
>>> torch.log(a)
tensor([ nan,  nan,  nan,  nan,  nan])
torch.log10(input, out=None) → Tensor

Returns a new tensor with the logarithm to the base 10 of the elements of input.

y_{i} = \log_{10}(x_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.rand(5)
>>> a
tensor([ 0.5224,  0.9354,  0.7257,  0.1301,  0.2251])


>>> torch.log10(a)
tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])
torch.log1p(input, out=None) → Tensor

Returns a new tensor with the natural logarithm of (1 + input).

y_i = \log_{e}(x_i + 1)

Note

This function is more accurate than torch.log() for small values of input

Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(5)
>>> a
tensor([-1.0090, -0.9923,  1.0249, -0.5372,  0.2492])
>>> torch.log1p(a)
tensor([    nan, -4.8653,  0.7055, -0.7705,  0.2225])
torch.log2(input, out=None) → Tensor

Returns a new tensor with the logarithm to the base 2 of the elements of input.

y_{i} = \log_{2}(x_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.rand(5)
>>> a
tensor([ 0.8419,  0.8003,  0.9971,  0.5287,  0.0490])


>>> torch.log2(a)
tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504])
torch.logical_not(input, out=None) → Tensor

Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as False and non-zeros are treated as True.

Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.logical_not(torch.tensor([True, False]))
tensor([ False,  True])
>>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16))
tensor([1, 0, 0], dtype=torch.int16)
torch.logical_xor(input, other, out=None) → Tensor

Computes the element-wise logical XOR of the given input tensors. Both input tensors must have the bool dtype.

Parameters
  • input (Tensor) – the input tensor.

  • other (Tensor) – the tensor to compute XOR with

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.logical_xor(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ False, False,  True])
torch.mul()
torch.mul(input, other, out=None)

Multiplies each element of the input input with the scalar other and returns a new resulting tensor.

\text{out}_i = \text{other} \times \text{input}_i

If input is of type FloatTensor or DoubleTensor, other should be a real number, otherwise it should be an integer

Parameters
  • input (Tensor) – the input tensor.

  • other (Number) – the number to be multiplied by each element of input

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(3)
>>> a
tensor([ 0.2015, -0.4255,  2.6087])
>>> torch.mul(a, 100)
tensor([  20.1494,  -42.5491,  260.8663])
torch.mul(input, other, out=None)

Each element of the tensor input is multiplied by the corresponding element of the Tensor other. The resulting tensor is returned.

The shapes of input and other must be broadcastable.

\text{out}_i = \text{input}_i \times \text{other}_i
Parameters
  • input (Tensor) – the first multiplicand tensor

  • other (Tensor) – the second multiplicand tensor

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4, 1)
>>> a
tensor([[ 1.1207],
        [-0.3137],
        [ 0.0700],
        [ 0.8378]])
>>> b = torch.randn(1, 4)
>>> b
tensor([[ 0.5146,  0.1216, -0.5244,  2.2382]])
>>> torch.mul(a, b)
tensor([[ 0.5767,  0.1363, -0.5877,  2.5083],
        [-0.1614, -0.0382,  0.1645, -0.7021],
        [ 0.0360,  0.0085, -0.0367,  0.1567],
        [ 0.4312,  0.1019, -0.4394,  1.8753]])
torch.mvlgamma(input, p) → Tensor

Computes the multivariate log-gamma function with dimension p element-wise, given by

\log(\Gamma_{p}(a)) = C + \sum_{i=1}^{p} \log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right)

where C = \log(\pi) \times \frac{p (p - 1)}{4} and \Gamma(\cdot) is the Gamma function.

If any of the elements are less than or equal to \frac{p - 1}{2}, then an error is thrown.

Parameters
  • input (Tensor) – the tensor to compute the multivariate log-gamma function

  • p (int) – the number of dimensions

Example:

>>> a = torch.empty(2, 3).uniform_(1, 2)
>>> a
tensor([[1.6835, 1.8474, 1.1929],
        [1.0475, 1.7162, 1.4180]])
>>> torch.mvlgamma(a, 2)
tensor([[0.3928, 0.4007, 0.7586],
        [1.0311, 0.3901, 0.5049]])
torch.neg(input, out=None) → Tensor

Returns a new tensor with the negative of the elements of input.

\text{out} = -1 \times \text{input}
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(5)
>>> a
tensor([ 0.0090, -0.2262, -0.0682, -0.2866,  0.3940])
>>> torch.neg(a)
tensor([-0.0090,  0.2262,  0.0682,  0.2866, -0.3940])
torch.pow()
torch.pow(input, exponent, out=None) → Tensor

Takes the power of each element in input with exponent and returns a tensor with the result.

exponent can be either a single float number or a Tensor with the same number of elements as input.

When exponent is a scalar value, the operation applied is:

\text{out}_i = x_i ^ \text{exponent}

When exponent is a tensor, the operation applied is:

\text{out}_i = x_i ^ {\text{exponent}_i}

When exponent is a tensor, the shapes of input and exponent must be broadcastable.

Parameters
  • input (Tensor) – the input tensor.

  • exponent (float or tensor) – the exponent value

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.4331,  1.2475,  0.6834, -0.2791])
>>> torch.pow(a, 2)
tensor([ 0.1875,  1.5561,  0.4670,  0.0779])
>>> exp = torch.arange(1., 5.)

>>> a = torch.arange(1., 5.)
>>> a
tensor([ 1.,  2.,  3.,  4.])
>>> exp
tensor([ 1.,  2.,  3.,  4.])
>>> torch.pow(a, exp)
tensor([   1.,    4.,   27.,  256.])
torch.pow(self, exponent, out=None) → Tensor

self is a scalar float value, and exponent is a tensor. The returned tensor out is of the same shape as exponent

The operation applied is:

\text{out}_i = \text{self} ^ {\text{exponent}_i}
Parameters
  • self (float) – the scalar base value for the power operation

  • exponent (Tensor) – the exponent tensor

  • out (Tensor, optional) – the output tensor.

Example:

>>> exp = torch.arange(1., 5.)
>>> base = 2
>>> torch.pow(base, exp)
tensor([  2.,   4.,   8.,  16.])
torch.real(input, out=None) → Tensor

Computes the element-wise real value of the given input tensor.

\text{out}_{i} = \operatorname{real}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.real(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))
tensor([ -1,  -2,  3])
torch.reciprocal(input, out=None) → Tensor

Returns a new tensor with the reciprocal of the elements of input

\text{out}_{i} = \frac{1}{\text{input}_{i}}
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.4595, -2.1219, -1.4314,  0.7298])
>>> torch.reciprocal(a)
tensor([-2.1763, -0.4713, -0.6986,  1.3702])
torch.remainder(input, other, out=None) → Tensor

Computes the element-wise remainder of division.

The divisor and dividend may contain both integer and floating point numbers. The remainder has the same sign as the divisor.

When other is a tensor, the shapes of input and other must be broadcastable.

Parameters
  • input (Tensor) – the dividend

  • other (Tensor or float) – the divisor that may be either a number or a Tensor of the same shape as the dividend

  • out (Tensor, optional) – the output tensor.

Example:

>>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([ 1.,  0.,  1.,  1.,  0.,  1.])
>>> torch.remainder(torch.tensor([1., 2, 3, 4, 5]), 1.5)
tensor([ 1.0000,  0.5000,  0.0000,  1.0000,  0.5000])

See also

torch.fmod(), which computes the element-wise remainder of division equivalently to the C library function fmod().
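
The difference between the two functions shows up for negative dividends; reusing the inputs from the examples above:

>>> x = torch.tensor([-3., -2, -1, 1, 2, 3])
>>> torch.fmod(x, 2)       # sign follows the dividend
tensor([-1., -0., -1.,  1.,  0.,  1.])
>>> torch.remainder(x, 2)  # sign follows the divisor
tensor([ 1.,  0.,  1.,  1.,  0.,  1.])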

torch.round(input, out=None) → Tensor

Returns a new tensor with each of the elements of input rounded to the closest integer.

Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.9920,  0.6077,  0.9734, -1.0362])
>>> torch.round(a)
tensor([ 1.,  1.,  1., -1.])
torch.rsqrt(input, out=None) → Tensor

Returns a new tensor with the reciprocal of the square-root of each of the elements of input.

\text{out}_{i} = \frac{1}{\sqrt{\text{input}_{i}}}
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.0370,  0.2970,  1.5420, -0.9105])
>>> torch.rsqrt(a)
tensor([    nan,  1.8351,  0.8053,     nan])
torch.sigmoid(input, out=None) → Tensor

Returns a new tensor with the sigmoid of the elements of input.

\text{out}_{i} = \frac{1}{1 + e^{-\text{input}_{i}}}
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.9213,  1.0887, -0.8858, -1.7683])
>>> torch.sigmoid(a)
tensor([ 0.7153,  0.7481,  0.2920,  0.1458])
torch.sign(input, out=None) → Tensor

Returns a new tensor with the signs of the elements of input.

\text{out}_{i} = \operatorname{sgn}(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> a
tensor([ 0.7000, -1.2000,  0.0000,  2.3000])
>>> torch.sign(a)
tensor([ 1., -1.,  0.,  1.])
torch.sin(input, out=None) → Tensor

Returns a new tensor with the sine of the elements of input.

\text{out}_{i} = \sin(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-0.5461,  0.1347, -2.7266, -0.2746])
>>> torch.sin(a)
tensor([-0.5194,  0.1343, -0.4032, -0.2711])
torch.sinh(input, out=None) → Tensor

Returns a new tensor with the hyperbolic sine of the elements of input.

\text{out}_{i} = \sinh(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.5380, -0.8632, -0.1265,  0.9399])
>>> torch.sinh(a)
tensor([ 0.5644, -0.9744, -0.1268,  1.0845])
torch.sqrt(input, out=None) → Tensor

Returns a new tensor with the square-root of the elements of input.

\text{out}_{i} = \sqrt{\text{input}_{i}}
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-2.0755,  1.0226,  0.0831,  0.4806])
>>> torch.sqrt(a)
tensor([    nan,  1.0112,  0.2883,  0.6933])
torch.tan(input, out=None) → Tensor

Returns a new tensor with the tangent of the elements of input.

\text{out}_{i} = \tan(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([-1.2027, -1.7687,  0.4412, -1.3856])
>>> torch.tan(a)
tensor([-2.5930,  4.9859,  0.4722, -5.3366])
torch.tanh(input, out=None) → Tensor

Returns a new tensor with the hyperbolic tangent of the elements of input.

\text{out}_{i} = \tanh(\text{input}_{i})
Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.8986, -0.7279,  1.1745,  0.2611])
>>> torch.tanh(a)
tensor([ 0.7156, -0.6218,  0.8257,  0.2553])
torch.trunc(input, out=None) → Tensor

Returns a new tensor with the truncated integer values of the elements of input.

Parameters
  • input (Tensor) – the input tensor.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 3.4742,  0.5466, -0.8008, -0.9079])
>>> torch.trunc(a)
tensor([ 3.,  0., -0., -0.])

Reduction Ops

torch.argmax()
torch.argmax(input) → LongTensor

Returns the indices of the maximum value of all elements in the input tensor.

This is the second value returned by torch.max(). See its documentation for the exact semantics of this method.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 1.3398,  0.2663, -0.2686,  0.2450],
        [-0.7401, -0.8805, -0.3402, -1.1936],
        [ 0.4907, -1.3948, -1.0691, -0.3132],
        [-1.6092,  0.5419, -0.2993,  0.3195]])
>>> torch.argmax(a)
tensor(0)
torch.argmax(input, dim, keepdim=False) → LongTensor

Returns the indices of the maximum values of a tensor across a dimension.

This is the second value returned by torch.max(). See its documentation for the exact semantics of this method.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce. If None, the argmax of the flattened input is returned.

  • keepdim (bool) – whether the output tensor has dim retained or not. Ignored if dim=None.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 1.3398,  0.2663, -0.2686,  0.2450],
        [-0.7401, -0.8805, -0.3402, -1.1936],
        [ 0.4907, -1.3948, -1.0691, -0.3132],
        [-1.6092,  0.5419, -0.2993,  0.3195]])
>>> torch.argmax(a, dim=1)
tensor([ 0,  2,  0,  1])
torch.argmin()
torch.argmin(input) → LongTensor

Returns the indices of the minimum value of all elements in the input tensor.

This is the second value returned by torch.min(). See its documentation for the exact semantics of this method.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.1139,  0.2254, -0.1381,  0.3687],
        [ 1.0100, -1.1975, -0.0102, -0.4732],
        [-0.9240,  0.1207, -0.7506, -1.0213],
        [ 1.7809, -1.2960,  0.9384,  0.1438]])
>>> torch.argmin(a)
tensor(13)
torch.argmin(input, dim, keepdim=False, out=None) → LongTensor

Returns the indices of the minimum values of a tensor across a dimension.

This is the second value returned by torch.min(). See its documentation for the exact semantics of this method.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce. If None, the argmin of the flattened input is returned.

  • keepdim (bool) – whether the output tensor has dim retained or not. Ignored if dim=None.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.1139,  0.2254, -0.1381,  0.3687],
        [ 1.0100, -1.1975, -0.0102, -0.4732],
        [-0.9240,  0.1207, -0.7506, -1.0213],
        [ 1.7809, -1.2960,  0.9384,  0.1438]])
>>> torch.argmin(a, dim=1)
tensor([ 2,  1,  3,  1])
torch.cumprod(input, dim, out=None, dtype=None) → Tensor

Returns the cumulative product of elements of input in the dimension dim.

For example, if input is a vector of size N, the result will also be a vector of size N, with elements.

y_i = x_1 \times x_2 \times x_3 \times \dots \times x_i
Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to do the operation over

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(10)
>>> a
tensor([ 0.6001,  0.2069, -0.1919,  0.9792,  0.6727,  1.0062,  0.4126,
        -0.2129, -0.4206,  0.1968])
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001,  0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065,
         0.0014, -0.0006, -0.0001])

>>> a[5] = 0.0
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001,  0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,
         0.0000, -0.0000, -0.0000])
torch.cumsum(input, dim, out=None, dtype=None) → Tensor

Returns the cumulative sum of elements of input in the dimension dim.

For example, if input is a vector of size N, the result will also be a vector of size N, with elements.

y_i = x_1 + x_2 + x_3 + \dots + x_i
Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to do the operation over

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(10)
>>> a
tensor([-0.8286, -0.4890,  0.5155,  0.8443,  0.1865, -0.1752, -2.0595,
         0.1850, -1.1571, -0.4243])
>>> torch.cumsum(a, dim=0)
tensor([-0.8286, -1.3175, -0.8020,  0.0423,  0.2289,  0.0537, -2.0058,
        -1.8209, -2.9780, -3.4022])
torch.dist(input, other, p=2) → Tensor

Returns the p-norm of (input - other)

The shapes of input and other must be broadcastable.

Parameters
  • input (Tensor) – the input tensor.

  • other (Tensor) – the right-hand-side input tensor

  • p (float, optional) – the norm to be computed

Example:

>>> x = torch.randn(4)
>>> x
tensor([-1.5393, -0.8675,  0.5916,  1.6321])
>>> y = torch.randn(4)
>>> y
tensor([ 0.0967, -1.0511,  0.6295,  0.8360])
>>> torch.dist(x, y, 3.5)
tensor(1.6727)
>>> torch.dist(x, y, 3)
tensor(1.6973)
>>> torch.dist(x, y, 0)
tensor(inf)
>>> torch.dist(x, y, 1)
tensor(2.6537)
torch.logsumexp(input, dim, keepdim=False, out=None)

Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. The computation is numerically stabilized.

For summation index j given by dim and other indices i, the result is

\text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij})

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(3, 3)
>>> torch.logsumexp(a, 1)
tensor([ 0.8442,  1.4322,  0.8711])
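
The stabilized result agrees with the naive computation up to floating point error; a quick check (the printed distance is illustrative, but will be near zero):

>>> torch.dist(torch.logsumexp(a, 1), torch.log(torch.sum(torch.exp(a), 1)))
tensor(1.1921e-07)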
torch.mean()
torch.mean(input) → Tensor

Returns the mean value of all elements in the input tensor.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.2294, -0.5481,  1.3288]])
>>> torch.mean(a)
tensor(0.3367)
torch.mean(input, dim, keepdim=False, out=None) → Tensor

Returns the mean value of each row of the input tensor in the given dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3841,  0.6320,  0.4254, -0.7384],
        [-0.9644,  1.0131, -0.6549, -1.4279],
        [-0.2951, -1.3350, -0.7694,  0.5600],
        [ 1.0842, -0.9580,  0.3623,  0.2343]])
>>> torch.mean(a, 1)
tensor([-0.0163, -0.5085, -0.4599,  0.1807])
>>> torch.mean(a, 1, True)
tensor([[-0.0163],
        [-0.5085],
        [-0.4599],
        [ 0.1807]])
torch.median()
torch.median(input) → Tensor

Returns the median value of all elements in the input tensor.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[ 1.5219, -1.5212,  0.2202]])
>>> torch.median(a)
tensor(0.2202)
torch.median(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)

Returns a namedtuple (values, indices) where values is the median value of each row of the input tensor in the given dimension dim, and indices is the index location of each median value found.

By default, dim is the last dimension of the input tensor.

If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • values (Tensor, optional) – the output tensor

  • indices (Tensor, optional) – the output index tensor

Example:

>>> a = torch.randn(4, 5)
>>> a
tensor([[ 0.2505, -0.3982, -0.9948,  0.3518, -1.3131],
        [ 0.3180, -0.6993,  1.0436,  0.0438,  0.2270],
        [-0.2751,  0.7303,  0.2192,  0.3321,  0.2488],
        [ 1.0778, -1.9510,  0.7048,  0.4742, -0.7125]])
>>> torch.median(a, 1)
torch.return_types.median(values=tensor([-0.3982,  0.2270,  0.2488,  0.4742]), indices=tensor([1, 4, 4, 3]))
torch.mode(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)

Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found.

By default, dim is the last dimension of the input tensor.

If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.

Note

This function is not defined for torch.cuda.Tensor yet.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • values (Tensor, optional) – the output tensor

  • indices (Tensor, optional) – the output index tensor

Example:

>>> a = torch.randint(10, (5,))
>>> a
tensor([6, 5, 1, 0, 2])
>>> b = a + (torch.randn(50, 1) * 5).long()
>>> torch.mode(b, 0)
torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2]))
torch.norm(input, p='fro', dim=None, keepdim=False, out=None, dtype=None)[source]

Returns the matrix norm or vector norm of a given tensor.

Parameters
  • input (Tensor) – the input tensor

  • p (int, float, inf, -inf, 'fro', 'nuc', optional) –

    the order of norm. Default: 'fro'. The following norms can be calculated:

    ord     matrix norm                     vector norm
    ------  ------------------------------  ---------------------------
    None    Frobenius norm                  2-norm
    'fro'   Frobenius norm                  --
    'nuc'   nuclear norm                    --
    Other   as vec norm when dim is None    sum(abs(x)**ord)**(1./ord)

  • dim (int, 2-tuple of python:ints, 2-list of python:ints, optional) – If it is an int, a vector norm is calculated; if it is a 2-tuple of ints, a matrix norm is calculated. If the value is None, a matrix norm is calculated when the input tensor has exactly two dimensions, and a vector norm when it has exactly one dimension. If the input tensor has more than two dimensions, the vector norm is applied to the last dimension.

  • keepdim (bool, optional) – whether the output tensors have dim retained or not. Ignored if dim = None and out = None. Default: False

  • out (Tensor, optional) – the output tensor. Ignored if dim = None and out = None.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype while performing the operation. Default: None.

Example:

>>> import torch
>>> a = torch.arange(9, dtype= torch.float) - 4
>>> b = a.reshape((3, 3))
>>> torch.norm(a)
tensor(7.7460)
>>> torch.norm(b)
tensor(7.7460)
>>> torch.norm(a, float('inf'))
tensor(4.)
>>> torch.norm(b, float('inf'))
tensor(4.)
>>> c = torch.tensor([[ 1, 2, 3],[-1, 1, 4]] , dtype= torch.float)
>>> torch.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> torch.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> torch.norm(c, p=1, dim=1)
tensor([6., 6.])
>>> d = torch.arange(8, dtype= torch.float).reshape(2,2,2)
>>> torch.norm(d, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> torch.norm(d[0, :, :]), torch.norm(d[1, :, :])
(tensor(3.7417), tensor(11.2250))
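
The nuclear norm row of the table above can be sanity-checked against the singular values: up to numerical precision, norm(m, 'nuc') equals the sum of the S factor returned by torch.svd() (a quick sketch):

>>> m = torch.randn(3, 3)
>>> torch.allclose(torch.norm(m, 'nuc'), torch.svd(m).S.sum())
True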
torch.prod()
torch.prod(input, dtype=None) → Tensor

Returns the product of all elements in the input tensor.

Parameters
  • input (Tensor) – the input tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8020,  0.5428, -1.5854]])
>>> torch.prod(a)
tensor(0.6902)
torch.prod(input, dim, keepdim=False, dtype=None) → Tensor

Returns the product of each row of the input tensor in the given dimension dim.

If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.

Example:

>>> a = torch.randn(4, 2)
>>> a
tensor([[ 0.5261, -0.3837],
        [ 1.1857, -0.2498],
        [-1.1646,  0.0705],
        [ 1.1131, -1.0629]])
>>> torch.prod(a, 1)
tensor([-0.2018, -0.2962, -0.0821, -1.1831])
torch.std()
torch.std(input, unbiased=True) → Tensor

Returns the standard-deviation of all elements in the input tensor.

If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • unbiased (bool) – whether to use the unbiased estimation or not

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8166, -1.3802, -0.3560]])
>>> torch.std(a)
tensor(0.5130)
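
To make the unbiased flag concrete, a small sketch on a hand-checkable input: the unbiased estimator divides the summed squared deviations by N - 1 (Bessel's correction), the biased one by N:

>>> t = torch.tensor([1., 2., 3., 4.])
>>> torch.std(t, unbiased=True), torch.std(t, unbiased=False)
(tensor(1.2910), tensor(1.1180))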
torch.std(input, dim, keepdim=False, unbiased=True, out=None) → Tensor

Returns the standard-deviation of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of python:ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • unbiased (bool) – whether to use the unbiased estimation or not

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.2035,  1.2959,  1.8101, -0.4644],
        [ 1.5027, -0.3270,  0.5905,  0.6538],
        [-1.5745,  1.3330, -0.5596, -0.6548],
        [ 0.1264, -0.5080,  1.6420,  0.1992]])
>>> torch.std(a, dim=1)
tensor([ 1.0311,  0.7477,  1.2204,  0.9087])
torch.std_mean()
torch.std_mean(input, unbiased=True) -> (Tensor, Tensor)

Returns the standard-deviation and mean of all elements in the input tensor.

If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • unbiased (bool) – whether to use the unbiased estimation or not

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[0.3364, 0.3591, 0.9462]])
>>> torch.std_mean(a)
(tensor(0.3457), tensor(0.5472))
torch.std_mean(input, dim, keepdim=False, unbiased=True) -> (Tensor, Tensor)

Returns the standard-deviation and mean of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of python:ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • unbiased (bool) – whether to use the unbiased estimation or not

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.5648, -0.5984, -1.2676, -1.4471],
        [ 0.9267,  1.0612,  1.1050, -0.6014],
        [ 0.0154,  1.9301,  0.0125, -1.0904],
        [-1.9711, -0.7748, -1.3840,  0.5067]])
>>> torch.std_mean(a, 1)
(tensor([0.9110, 0.8197, 1.2552, 1.0608]), tensor([-0.6871,  0.6229,  0.2169, -0.9058]))
torch.sum()
torch.sum(input, dtype=None) → Tensor

Returns the sum of all elements in the input tensor.

Parameters
  • input (Tensor) – the input tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.1133, -0.9567,  0.2958]])
>>> torch.sum(a)
tensor(-0.5475)
torch.sum(input, dim, keepdim=False, dtype=None) → Tensor

Returns the sum of each row of the input tensor in the given dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of python:ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0569, -0.2475,  0.0737, -0.3429],
        [-0.2993,  0.9138,  0.9337, -1.6864],
        [ 0.1132,  0.7892, -0.1003,  0.5688],
        [ 0.3637, -0.9906, -0.4752, -1.5197]])
>>> torch.sum(a, 1)
tensor([-0.4598, -0.1381,  1.3708, -2.6217])
>>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
>>> torch.sum(b, (2, 1))
tensor([ 435, 1335, 2235, 3135])
torch.unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None)[source]

Returns the unique elements of the input tensor.

Parameters
  • input (Tensor) – the input tensor

  • sorted (bool) – Whether to sort the unique elements in ascending order before returning as output.

  • return_inverse (bool) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.

  • return_counts (bool) – Whether to also return the counts for each unique element.

  • dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. Default: None

Returns

A tensor or a tuple of tensors containing

  • output (Tensor): the output list of unique scalar elements.

  • inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices of where elements in the original input map to in the output; otherwise, this function will only return a single tensor.

  • counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor.

Return type

(Tensor, Tensor (optional), Tensor (optional))

Example:

>>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))
>>> output
tensor([ 1,  2,  3])

>>> output, inverse_indices = torch.unique(
        torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([ 1,  2,  3])
>>> inverse_indices
tensor([ 0,  2,  1,  2])

>>> output, inverse_indices = torch.unique(
        torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([ 1,  2,  3])
>>> inverse_indices
tensor([[ 0,  2],
        [ 1,  2]])
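
The return_counts output works analogously (a quick sketch on the same flattened input as above):

>>> output, counts = torch.unique(
        torch.tensor([1, 3, 2, 3], dtype=torch.long), return_counts=True)
>>> output
tensor([ 1,  2,  3])
>>> counts
tensor([ 1,  1,  2])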
torch.unique_consecutive(input, return_inverse=False, return_counts=False, dim=None)[source]

Eliminates all but the first element from every consecutive group of equivalent elements.

Note

This function is different from torch.unique() in that it only eliminates consecutive duplicate values. Its semantics are similar to std::unique in C++.

Parameters
  • input (Tensor) – the input tensor

  • return_inverse (bool) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.

  • return_counts (bool) – Whether to also return the counts for each unique element.

  • dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. Default: None

Returns

A tensor or a tuple of tensors containing

  • output (Tensor): the output list of unique scalar elements.

  • inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices of where elements in the original input map to in the output; otherwise, this function will only return a single tensor.

  • counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor.

Return type

(Tensor, Tensor (optional), Tensor (optional))

Example:

>>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
>>> output = torch.unique_consecutive(x)
>>> output
tensor([1, 2, 3, 1, 2])

>>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> inverse_indices
tensor([0, 0, 1, 1, 2, 3, 3, 4])

>>> output, counts = torch.unique_consecutive(x, return_counts=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> counts
tensor([2, 2, 1, 2, 1])
torch.var()
torch.var(input, unbiased=True) → Tensor

Returns the variance of all elements in the input tensor.

If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • unbiased (bool) – whether to use the unbiased estimation or not

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.3425, -1.2636, -0.4864]])
>>> torch.var(a)
tensor(0.2455)
torch.var(input, dim, keepdim=False, unbiased=True, out=None) → Tensor

Returns the variance of each row of the input tensor in the given dimension dim.

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of python:ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • unbiased (bool) – whether to use the unbiased estimation or not

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3567,  1.7385, -1.3042,  0.7423],
        [ 1.3436, -0.1015, -0.9834, -0.8438],
        [ 0.6056,  0.1089, -0.3112, -1.4085],
        [-0.7700,  0.6074, -0.1469,  0.7777]])
>>> torch.var(a, 1)
tensor([ 1.7444,  1.1363,  0.7356,  0.5112])
torch.var_mean()
torch.var_mean(input, unbiased=True) -> (Tensor, Tensor)

Returns the variance and mean of all elements in the input tensor.

If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • unbiased (bool) – whether to use the unbiased estimation or not

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[0.0146, 0.4258, 0.2211]])
>>> torch.var_mean(a)
(tensor(0.0423), tensor(0.2205))
torch.var_mean(input, dim, keepdim=False, unbiased=True) -> (Tensor, Tensor)

Returns the variance and mean of each row of the input tensor in the given dimension dim.

If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int or tuple of python:ints) – the dimension or dimensions to reduce.

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • unbiased (bool) – whether to use the unbiased estimation or not

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[-1.5650,  2.0415, -0.1024, -0.5790],
        [ 0.2325, -2.6145, -1.6428, -0.3537],
        [-0.2159, -1.1069,  1.2882, -1.3265],
        [-0.6706, -1.5893,  0.6827,  1.6727]])
>>> torch.var_mean(a, 1)
(tensor([2.3174, 1.6403, 1.4092, 2.0791]), tensor([-0.0512, -1.0946, -0.3403,  0.0239]))

Comparison Ops

torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → bool

This function checks if input and other satisfy the condition:

\lvert \text{input} - \text{other} \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert \text{other} \rvert

elementwise, for all elements of input and other. The behaviour of this function is analogous to numpy.allclose.

Parameters
  • input (Tensor) – first tensor to compare

  • other (Tensor) – second tensor to compare

  • atol (float, optional) – absolute tolerance. Default: 1e-08

  • rtol (float, optional) – relative tolerance. Default: 1e-05

  • equal_nan (bool, optional) – if True, then two NaN s will be compared as equal. Default: False

Example:

>>> torch.allclose(torch.tensor([10000., 1e-07]), torch.tensor([10000.1, 1e-08]))
False
>>> torch.allclose(torch.tensor([10000., 1e-08]), torch.tensor([10000.1, 1e-09]))
True
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]))
False
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]), equal_nan=True)
True
torch.argsort(input, dim=-1, descending=False, out=None) → LongTensor

Returns the indices that sort a tensor along a given dimension in ascending order by value.

This is the second value returned by torch.sort(). See its documentation for the exact semantics of this method.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int, optional) – the dimension to sort along

  • descending (bool, optional) – controls the sorting order (ascending or descending)

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0785,  1.5267, -0.8521,  0.4065],
        [ 0.1598,  0.0788, -0.0745, -1.2700],
        [ 1.2208,  1.0722, -0.7064,  1.2564],
        [ 0.0669, -0.2318, -0.8229, -0.9280]])


>>> torch.argsort(a, dim=1)
tensor([[2, 0, 3, 1],
        [3, 2, 1, 0],
        [2, 1, 0, 3],
        [3, 2, 1, 0]])
torch.eq(input, other, out=None) → Tensor

Computes element-wise equality

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

Parameters
  • input (Tensor) – the tensor to compare

  • other (Tensor or float) – the tensor or value to compare

  • out (Tensor, optional) – the output tensor that must be a BoolTensor

Returns

A torch.BoolTensor containing a True at each location where comparison is true

Return type

Tensor

Example:

>>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[ True, False],
        [False,  True]])
torch.equal(input, other) → bool

True if two tensors have the same size and elements, False otherwise.

Example:

>>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))
True
torch.ge(input, other, out=None) → Tensor

Computes input ≥ other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

Parameters
  • input (Tensor) – the tensor to compare

  • other (Tensor or float) – the tensor or value to compare

  • out (Tensor, optional) – the output tensor that must be a BoolTensor

Returns

A torch.BoolTensor containing a True at each location where comparison is true

Return type

Tensor

Example:

>>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[True, True], [False, True]])
torch.gt(input, other, out=None) → Tensor

Computes input > other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

Parameters
  • input (Tensor) – the tensor to compare

  • other (Tensor or float) – the tensor or value to compare

  • out (Tensor, optional) – the output tensor that must be a BoolTensor

Returns

A torch.BoolTensor containing a True at each location where comparison is true

Return type

Tensor

Example:

>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [False, False]])
torch.isfinite(tensor)[source]

Returns a new tensor with boolean elements representing if each element is finite or not.

Parameters

tensor (Tensor) – A tensor to check

Returns

A torch.Tensor with dtype torch.bool containing a True at each location of finite elements and False otherwise

Return type

Tensor

Example:

>>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([True,  False,  True,  False,  False])
torch.isinf(tensor)[source]

Returns a new tensor with boolean elements representing if each element is +/-INF or not.

Parameters

tensor (Tensor) – A tensor to check

Returns

A torch.Tensor with dtype torch.bool containing a True at each location of +/-INF elements and False otherwise

Return type

Tensor

Example:

>>> torch.isinf(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([False,  True,  False,  True,  False])
torch.isnan(input) → Tensor

Returns a new tensor with boolean elements representing if each element is NaN or not.

Parameters

input (Tensor) – A tensor to check

Returns

A torch.BoolTensor containing a True at each location of NaN elements.

Return type

Tensor

Example:

>>> torch.isnan(torch.tensor([1, float('nan'), 2]))
tensor([False, True, False])
torch.kthvalue(input, k, dim=None, keepdim=False, out=None) -> (Tensor, LongTensor)

Returns a namedtuple (values, indices) where values is the k th smallest element of each row of the input tensor in the given dimension dim, and indices is the index location of each element found.

If dim is not given, the last dimension of the input is chosen.

If keepdim is True, both the values and indices tensors are the same size as input, except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in both the values and indices tensors having 1 fewer dimension than the input tensor.

Parameters
  • input (Tensor) – the input tensor.

  • k (int) – k for the k-th smallest element

  • dim (int, optional) – the dimension to find the kth value along

  • keepdim (bool) – whether the output tensor has dim retained or not.

  • out (tuple, optional) – the output tuple of (Tensor, LongTensor) can be optionally given to be used as output buffers

Example:

>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1.,  2.,  3.,  4.,  5.])
>>> torch.kthvalue(x, 4)
torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))

>>> x = torch.arange(1., 7.).resize_(2, 3)
>>> x
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])
>>> torch.kthvalue(x, 2, 0, True)
torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]]))
torch.le(input, other, out=None) → Tensor

Computes input ≤ other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

Parameters
  • input (Tensor) – the tensor to compare

  • other (Tensor or float) – the tensor or value to compare

  • out (Tensor, optional) – the output tensor that must be a BoolTensor

Returns

A torch.BoolTensor containing a True at each location where comparison is true

Return type

Tensor

Example:

>>> torch.le(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[True, False], [True, True]])
torch.lt(input, other, out=None) → Tensor

Computes input < other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

Parameters
  • input (Tensor) – the tensor to compare

  • other (Tensor or float) – the tensor or value to compare

  • out (Tensor, optional) – the output tensor that must be a BoolTensor

Returns

A torch.BoolTensor containing a True at each location where comparison is true

Return type

Tensor

Example:

>>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, False], [True, False]])
torch.max()
torch.max(input) → Tensor

Returns the maximum value of all elements in the input tensor.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6763,  0.7445, -2.2369]])
>>> torch.max(a)
tensor(0.7445)
torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)

Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim, and indices is the index location of each maximum value found (argmax).

If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce.

  • keepdim (bool, optional) – whether the output tensors have dim retained or not. Default: False.

  • out (tuple, optional) – the result tuple of two output tensors (max, max_indices)

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[-1.2360, -0.2942, -0.1222,  0.8475],
        [ 1.1949, -1.1127, -2.2379, -0.6702],
        [ 1.5717, -0.9207,  0.1297, -1.8768],
        [-0.6172,  1.0036, -0.6060, -0.2432]])
>>> torch.max(a, 1)
torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1]))
torch.max(input, other, out=None) → Tensor

Each element of the tensor input is compared with the corresponding element of the tensor other and an element-wise maximum is taken.

The shapes of input and other don’t need to match, but they must be broadcastable.

\text{out}_i = \max(\text{tensor}_i, \text{other}_i)

Note

When the shapes do not match, the shape of the returned output tensor follows the broadcasting rules.

Parameters
  • input (Tensor) – the input tensor.

  • other (Tensor) – the second input tensor

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.2942, -0.7416,  0.2653, -0.1584])
>>> b = torch.randn(4)
>>> b
tensor([ 0.8722, -1.7421, -0.4141, -0.5055])
>>> torch.max(a, b)
tensor([ 0.8722, -0.7416,  0.2653, -0.1584])
torch.min()
torch.min(input) → Tensor

Returns the minimum value of all elements in the input tensor.

Parameters

input (Tensor) – the input tensor.

Example:

>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6750,  1.0857,  1.7197]])
>>> torch.min(a)
tensor(0.6750)
torch.min(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)

Returns a namedtuple (values, indices) where values is the minimum value of each row of the input tensor in the given dimension dim, and indices is the index location of each minimum value found (argmin).

If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – the dimension to reduce.

  • keepdim (bool) – whether the output tensors have dim retained or not.

  • out (tuple, optional) – the tuple of two output tensors (min, min_indices)

Example:

>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.6248,  1.1334, -1.1899, -0.2803],
        [-1.4644, -0.2635, -0.3651,  0.6134],
        [ 0.2457,  0.0384,  1.0128,  0.7015],
        [-0.1153,  2.9849,  2.1458,  0.5788]])
>>> torch.min(a, 1)
torch.return_types.min(values=tensor([-1.1899, -1.4644,  0.0384, -0.1153]), indices=tensor([2, 0, 1, 0]))
torch.min(input, other, out=None) → Tensor

Each element of the tensor input is compared with the corresponding element of the tensor other and an element-wise minimum is taken. The resulting tensor is returned.

The shapes of input and other don’t need to match, but they must be broadcastable.

\text{out}_i = \min(\text{tensor}_i, \text{other}_i)

Note

When the shapes do not match, the shape of the returned output tensor follows the broadcasting rules.

Parameters
  • input (Tensor) – the input tensor.

  • other (Tensor) – the second input tensor

  • out (Tensor, optional) – the output tensor.

Example:

>>> a = torch.randn(4)
>>> a
tensor([ 0.8137, -1.1740, -0.6460,  0.6308])
>>> b = torch.randn(4)
>>> b
tensor([-0.1369,  0.1555,  0.4019, -0.1929])
>>> torch.min(a, b)
tensor([-0.1369, -1.1740, -0.6460, -0.1929])
torch.ne(input, other, out=None) → Tensor

Computes input ≠ other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

Parameters
  • input (Tensor) – the tensor to compare

  • other (Tensor or float) – the tensor or value to compare

  • out (Tensor, optional) – the output tensor that must be a BoolTensor

Returns

A torch.BoolTensor containing a True at each location where comparison is true.

Return type

Tensor

Example:

>>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [True, False]])
torch.sort(input, dim=-1, descending=False, out=None) -> (Tensor, LongTensor)

Sorts the elements of the input tensor along a given dimension in ascending order by value.

If dim is not given, the last dimension of the input is chosen.

If descending is True then the elements are sorted in descending order by value.

A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor.

Parameters
  • input (Tensor) – the input tensor.

  • dim (int, optional) – the dimension to sort along

  • descending (bool, optional) – controls the sorting order (ascending or descending)

  • out (tuple, optional) – the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers

Example:

>>> x = torch.randn(3, 4)
>>> sorted, indices = torch.sort(x)
>>> sorted
tensor([[-0.2162,  0.0608,  0.6719,  2.3332],
        [-0.5793,  0.0061,  0.6058,  0.9497],
        [-0.5071,  0.3343,  0.9553,  1.0960]])
>>> indices
tensor([[ 1,  0,  2,  3],
        [ 3,  1,  0,  2],
        [ 0,  3,  1,  2]])

>>> sorted, indices = torch.sort(x, 0)
>>> sorted
tensor([[-0.5071, -0.2162,  0.6719, -0.5793],
        [ 0.0608,  0.0061,  0.9497,  0.3343],
        [ 0.6058,  0.9553,  1.0960,  2.3332]])
>>> indices
tensor([[ 2,  0,  0,  1],
        [ 0,  1,  1,  2],
        [ 1,  2,  2,  0]])
torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)

Returns the k largest elements of the given input tensor along a given dimension.

If dim is not given, the last dimension of the input is chosen.

If largest is False then the k smallest elements are returned.

A namedtuple of (values, indices) is returned, where the indices are the indices of the elements in the original input tensor.

If the boolean option sorted is True, the returned k elements are themselves sorted.

Parameters
  • input (Tensor) – the input tensor.

  • k (int) – the k in “top-k”

  • dim (int, optional) – the dimension to sort along

  • largest (bool, optional) – controls whether to return largest or smallest elements

  • sorted (bool, optional) – controls whether to return the elements in sorted order

  • out (tuple, optional) – the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers

Example:

>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1.,  2.,  3.,  4.,  5.])
>>> torch.topk(x, 3)
torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))
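
The smallest elements can be requested with largest=False; a quick sketch on the same x as above:

>>> torch.topk(x, 3, largest=False)
torch.return_types.topk(values=tensor([1., 2., 3.]), indices=tensor([0, 1, 2]))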

Spectral Ops

torch.fft(input, signal_ndim, normalized=False) → Tensor

Complex-to-complex Discrete Fourier Transform

This method computes the complex-to-complex discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

X[\omega_1, \dots, \omega_d] = \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{-j\ 2\pi \sum_{i=0}^{d} \frac{\omega_i n_i}{N_i}},

where d = signal_ndim is the number of dimensions for the signal, and N_i is the size of signal dimension i.

This method supports 1D, 2D and 3D complex-to-complex transforms, indicated by signal_ndim. input must be a tensor with last dimension of size 2, representing the real and imaginary components of complex numbers, and should have at least signal_ndim + 1 dimensions with an optionally arbitrary number of leading batch dimensions. If normalized is set to True, this normalizes the result by dividing it with \sqrt{\prod_{i=1}^K N_i} so that the operator is unitary.

Returns the real and the imaginary parts together as one tensor of the same shape of input.

The inverse of this function is ifft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of same geometry with same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters
  • input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

Returns

A tensor containing the complex-to-complex Fourier transform result

Return type

Tensor

Example:

>>> # unbatched 2D FFT
>>> x = torch.randn(4, 3, 2)
>>> torch.fft(x, 2)
tensor([[[-0.0876,  1.7835],
         [-2.0399, -2.9754],
         [ 4.4773, -5.0119]],

        [[-1.5716,  2.7631],
         [-3.8846,  5.2652],
         [ 0.2046, -0.7088]],

        [[ 1.9938, -0.5901],
         [ 6.5637,  6.4556],
         [ 2.9865,  4.9318]],

        [[ 7.0193,  1.1742],
         [-1.3717, -2.1084],
         [ 2.0289,  2.9357]]])
>>> # batched 1D FFT
>>> torch.fft(x, 1)
tensor([[[ 1.8385,  1.2827],
         [-0.1831,  1.6593],
         [ 2.4243,  0.5367]],

        [[-0.9176, -1.5543],
         [-3.9943, -2.9860],
         [ 1.2838, -2.9420]],

        [[-0.8854, -0.6860],
         [ 2.4450,  0.0808],
         [ 1.3076, -0.5768]],

        [[-0.1231,  2.7411],
         [-0.3075, -1.7295],
         [-0.5384, -2.0299]]])
>>> # arbitrary number of batch dimensions, 2D FFT
>>> x = torch.randn(3, 3, 5, 5, 2)
>>> y = torch.fft(x, 2)
>>> y.shape
torch.Size([3, 3, 5, 5, 2])
torch.ifft(input, signal_ndim, normalized=False) → Tensor

Complex-to-complex Inverse Discrete Fourier Transform

This method computes the complex-to-complex inverse discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

X[\omega_1, \dots, \omega_d] = \frac{1}{\prod_{i=1}^{d} N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{\ j\ 2\pi \sum_{i=0}^{d} \frac{\omega_i n_i}{N_i}},

where d = signal_ndim is the number of dimensions for the signal, and N_i is the size of signal dimension i.

The argument specifications are almost identical to those of fft(). However, if normalized is set to True, this instead returns the results multiplied by \sqrt{\prod_{i=1}^d N_i}, to become a unitary operator. Therefore, to invert an fft(), the normalized argument should be set identically for fft().

Returns the real and the imaginary parts together as one tensor of the same shape of input.

The inverse of this function is fft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of same geometry with same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters
  • input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

Returns

A tensor containing the complex-to-complex inverse Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(3, 3, 2)
>>> x
tensor([[[ 1.2766,  1.3680],
         [-0.8337,  2.0251],
         [ 0.9465, -1.4390]],

        [[-0.1890,  1.6010],
         [ 1.1034, -1.9230],
         [-0.9482,  1.0775]],

        [[-0.7708, -0.8176],
         [-0.1843, -0.2287],
         [-1.9034, -0.2196]]])
>>> y = torch.fft(x, 2)
>>> torch.ifft(y, 2)  # recover x
tensor([[[ 1.2766,  1.3680],
         [-0.8337,  2.0251],
         [ 0.9465, -1.4390]],

        [[-0.1890,  1.6010],
         [ 1.1034, -1.9230],
         [-0.9482,  1.0775]],

        [[-0.7708, -0.8176],
         [-0.1843, -0.2287],
         [-1.9034, -0.2196]]])
torch.rfft(input, signal_ndim, normalized=False, onesided=True) → Tensor

Real-to-complex Discrete Fourier Transform

This method computes the real-to-complex discrete Fourier transform. It is mathematically equivalent to fft() with differences only in formats of the input and output.

This method supports 1D, 2D and 3D real-to-complex transforms, indicated by signal_ndim. input must be a tensor with at least signal_ndim dimensions with an optionally arbitrary number of leading batch dimensions. If normalized is set to True, this normalizes the result by dividing it with \sqrt{\prod_{i=1}^K N_i} so that the operator is unitary, where N_i is the size of signal dimension i.

The real-to-complex Fourier transform results follow conjugate symmetry:

X[\omega_1, \dots, \omega_d] = X^*[N_1 - \omega_1, \dots, N_d - \omega_d],

where the index arithmetic is computed modulo the size of the corresponding dimension, ^* is the conjugate operator, and d = signal_ndim. The onesided flag controls whether to avoid redundancy in the output results. If set to True (default), the output will not be the full complex result of shape (*, 2), where * is the shape of input, but instead the last dimension will be halved to size \lfloor N_d / 2 \rfloor + 1.

The inverse of this function is irfft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of same geometry with same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters
  • input (Tensor) – the input tensor of at least signal_ndim dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

  • onesided (bool, optional) – controls whether to return half of results to avoid redundancy. Default: True

Returns

A tensor containing the real-to-complex Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(5, 5)
>>> torch.rfft(x, 2).shape
torch.Size([5, 3, 2])
>>> torch.rfft(x, 2, onesided=False).shape
torch.Size([5, 5, 2])
torch.irfft(input, signal_ndim, normalized=False, onesided=True, signal_sizes=None) → Tensor

Complex-to-real Inverse Discrete Fourier Transform

This method computes the complex-to-real inverse discrete Fourier transform. It is mathematically equivalent to ifft() with differences only in formats of the input and output.

The argument specifications are almost identical to those of ifft(). Similar to ifft(), if normalized is set to True, this normalizes the result by multiplying it with \sqrt{\prod_{i=1}^K N_i} so that the operator is unitary, where N_i is the size of signal dimension i.

Note

Due to the conjugate symmetry, input does not need to contain the full complex frequency values. Roughly half of the values will be sufficient, as is the case when input is given by rfft() with rfft(signal, onesided=True). In such a case, set the onesided argument of this method to True. Moreover, the original signal shape information can sometimes be lost; optionally set signal_sizes to be the size of the original signal (without the batch dimensions if in batched mode) to recover it with the correct shape.

Therefore, to invert an rfft(), the normalized and onesided arguments should be set identically for irfft(), and preferably signal_sizes should be given to avoid size mismatch. See the example below for a case of size mismatch.

See rfft() for details on conjugate symmetry.

The inverse of this function is rfft().

Warning

Generally speaking, input to this function should contain values following conjugate symmetry. Note that even if onesided is True, often symmetry on some part is still needed. When this requirement is not satisfied, the behavior of irfft() is undefined. Since torch.autograd.gradcheck() estimates numerical Jacobian with point perturbations, irfft() will almost certainly fail the check.

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of same geometry with same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters
  • input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

  • onesided (bool, optional) – controls whether input was halved to avoid redundancy, e.g., by rfft(). Default: True

  • signal_sizes (list or torch.Size, optional) – the size of the original signal (without batch dimension). Default: None

Returns

A tensor containing the complex-to-real inverse Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(4, 4)
>>> torch.rfft(x, 2, onesided=True).shape
torch.Size([4, 3, 2])
>>>
>>> # notice that with onesided=True, output size does not determine the original signal size
>>> x = torch.randn(4, 5)

>>> torch.rfft(x, 2, onesided=True).shape
torch.Size([4, 3, 2])
>>>
>>> # now we use the original shape to recover x
>>> x
tensor([[-0.8992,  0.6117, -1.6091, -0.4155, -0.8346],
        [-2.1596, -0.0853,  0.7232,  0.1941, -0.0789],
        [-2.0329,  1.1031,  0.6869, -0.5042,  0.9895],
        [-0.1884,  0.2858, -1.5831,  0.9917, -0.8356]])
>>> y = torch.rfft(x, 2, onesided=True)
>>> torch.irfft(y, 2, onesided=True, signal_sizes=x.shape)  # recover x
tensor([[-0.8992,  0.6117, -1.6091, -0.4155, -0.8346],
        [-2.1596, -0.0853,  0.7232,  0.1941, -0.0789],
        [-2.0329,  1.1031,  0.6869, -0.5042,  0.9895],
        [-0.1884,  0.2858, -1.5831,  0.9917, -0.8356]])
torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True)[source]

Short-time Fourier transform (STFT).

Ignoring the optional batch dimension, this method computes the following expression:

X[m, \omega] = \sum_{k = 0}^{\text{win\_length}-1} \text{window}[k]\ \text{input}[m \times \text{hop\_length} + k]\ \exp\left(-j \frac{2\pi \cdot \omega k}{\text{win\_length}}\right),

where m is the index of the sliding window, and ω is the frequency with 0 ≤ ω < n_fft. When onesided is the default value True,

  • input must be either a 1-D time sequence or a 2-D batch of time sequences.

  • If hop_length is None (default), it is treated as equal to floor(n_fft / 4).

  • If win_length is None (default), it is treated as equal to n_fft.

  • window can be a 1-D tensor of size win_length, e.g., from torch.hann_window(). If window is None (default), it is treated as if having 1 everywhere in the window. If win_length < n_fft, window will be padded on both sides to length n_fft before being applied.

  • If center is True (default), input will be padded on both sides so that the t-th frame is centered at time t × hop_length. Otherwise, the t-th frame begins at time t × hop_length.

  • pad_mode determines the padding method used on input when center is True. See torch.nn.functional.pad() for all available options. Default is "reflect".

  • If onesided is True (default), only values for ω in [0, 1, 2, …, ⌊n_fft / 2⌋ + 1] are returned because the real-to-complex Fourier transform satisfies the conjugate symmetry, i.e., X[m, ω] = X[m, n_fft - ω]*.

  • If normalized is True (default is False), the function returns the normalized STFT results, i.e., multiplied by (frame_length)^{-0.5}.

Returns the real and the imaginary parts together as one tensor of size (* × N × T × 2), where * is the optional batch size of input, N is the number of frequencies where STFT is applied, T is the total number of frames used, and each pair in the last dimension represents a complex number as the real part and the imaginary part.

Warning

This function changed signature at version 0.4.1. Calling with the previous signature may cause an error or return an incorrect result.

Parameters
  • input (Tensor) – the input tensor

  • n_fft (int) – size of Fourier transform

  • hop_length (int, optional) – the distance between neighboring sliding window frames. Default: None (treated as equal to floor(n_fft / 4))

  • win_length (int, optional) – the size of window frame and STFT filter. Default: None (treated as equal to n_fft)

  • window (Tensor, optional) – the optional window function. Default: None (treated as a window of all 1s)

  • center (bool, optional) – whether to pad input on both sides so that the t-th frame is centered at time t × hop_length. Default: True

  • pad_mode (string, optional) – controls the padding method used when center is True. Default: "reflect"

  • normalized (bool, optional) – controls whether to return the normalized STFT results. Default: False

  • onesided (bool, optional) – controls whether to return half of results to avoid redundancy. Default: True

Returns

A tensor containing the STFT result with shape described above

Return type

Tensor
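
A minimal usage sketch (the shape follows the description above: a batch of 2 signals, 201 = n_fft / 2 + 1 onesided frequencies, and 11 frames given center=True padding; the input is random, so values are not shown):

>>> signal = torch.randn(2, 1000)
>>> window = torch.hann_window(400)
>>> out = torch.stft(signal, n_fft=400, hop_length=100, window=window)
>>> out.shape
torch.Size([2, 201, 11, 2])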

torch.bartlett_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Bartlett window function.

w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\ 2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \end{cases},

where N is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the N in the above formula is in fact window_length + 1. Also, we always have torch.bartlett_window(L, periodic=True) equal to torch.bartlett_window(L + 1, periodic=False)[:-1].

Note

If window_length = 1, the returned window contains a single value 1.

Parameters
  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size (window_length,) containing the window

Return type

Tensor
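
For illustration, the periodic identity above can be checked directly (values follow from the window formula):

>>> torch.bartlett_window(5, periodic=True)
tensor([0.0000, 0.4000, 0.8000, 0.8000, 0.4000])
>>> torch.allclose(torch.bartlett_window(5, periodic=True), torch.bartlett_window(6, periodic=False)[:-1])
True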

torch.blackman_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Blackman window function.

w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right)

where N is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the N in the above formula is in fact window_length + 1. Also, we always have torch.blackman_window(L, periodic=True) equal to torch.blackman_window(L + 1, periodic=False)[:-1].

Note

If window_length = 1, the returned window contains a single value 1.

Parameters
  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size (window_length,) containing the window

Return type

Tensor

torch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Hamming window function.

w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right),

where N is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the N in the above formula is in fact window_length + 1. Also, we always have torch.hamming_window(L, periodic=True) equal to torch.hamming_window(L + 1, periodic=False)[:-1].

Note

If window_length = 1, the returned window contains a single value 1.

Note

This is a generalized version of torch.hann_window().

Parameters
  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window.

  • alpha (float, optional) – The coefficient α in the equation above

  • beta (float, optional) – The coefficient β in the equation above

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size (window_length,) containing the window

Return type

Tensor

torch.hann_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Hann window function.

w[n] = \frac{1}{2}\ \left[1 - \cos \left( \frac{2 \pi n}{N - 1} \right)\right] = \sin^2 \left( \frac{\pi n}{N - 1} \right),

where N is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the N in the above formula is in fact window_length + 1. Also, we always have torch.hann_window(L, periodic=True) equal to torch.hann_window(L + 1, periodic=False)[:-1].

Note

If window_length = 1, the returned window contains a single value 1.

Parameters
  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size (window_length,) containing the window

Return type

Tensor
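
Since torch.hamming_window() generalizes this window (see the note under torch.hamming_window()), the relationship can be checked by setting α = β = 0.5 (a quick sketch):

>>> torch.allclose(torch.hann_window(10), torch.hamming_window(10, alpha=0.5, beta=0.5))
True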

Other Operations

torch.bincount(input, weights=None, minlength=0) → Tensor

Count the frequency of each value in an array of non-negative ints.

The number of bins (each of size 1) is one larger than the largest value in input unless input is empty, in which case the result is a tensor of size 0. If minlength is specified, the number of bins is at least minlength, and if input is empty, the result is a tensor of size minlength filled with zeros. If n is the value at position i, then out[n] += weights[i] if weights is specified, else out[n] += 1.

Note

When using the CUDA backend, this operation may induce nondeterministic behaviour that is not easily switched off. Please see the notes on Reproducibility for background.

Parameters
  • input (Tensor) – 1-d int tensor

  • weights (Tensor) – optional, weight for each value in the input tensor. Should be of same size as input tensor.

  • minlength (int) – optional, minimum number of bins. Should be non-negative.

Returns

a tensor of shape Size([max(input) + 1]) if input is non-empty, else Size(0)

Return type

output (Tensor)

Example:

>>> input = torch.randint(0, 8, (5,), dtype=torch.int64)
>>> weights = torch.linspace(0, 1, steps=5)
>>> input, weights
(tensor([4, 3, 6, 3, 4]),
 tensor([ 0.0000,  0.2500,  0.5000,  0.7500,  1.0000]))

>>> torch.bincount(input)
tensor([0, 0, 0, 2, 2, 0, 1])

>>> input.bincount(weights)
tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000])
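
A short sketch of minlength, which pads the histogram with trailing zero bins:

>>> torch.bincount(torch.tensor([1, 1, 2]), minlength=5)
tensor([0, 2, 1, 0, 0])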
torch.broadcast_tensors(*tensors) → List of Tensors[source]

Broadcasts the given tensors according to Broadcasting semantics.

Parameters

*tensors – any number of tensors of the same type

Warning

More than one element of a broadcasted tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

Example:

>>> x = torch.arange(3).view(1, 3)
>>> y = torch.arange(2).view(2, 1)
>>> a, b = torch.broadcast_tensors(x, y)
>>> a.size()
torch.Size([2, 3])
>>> a
tensor([[0, 1, 2],
        [0, 1, 2]])
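
To make the warning concrete, a sketch showing that the broadcasted rows of a above alias the same storage, so a single-element write is visible through both rows:

>>> a[0, 0] = 99
>>> a
tensor([[99,  1,  2],
        [99,  1,  2]])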
torch.cartesian_prod(*tensors)[source]

Computes the Cartesian product of the given sequence of tensors. The behavior is similar to Python's itertools.product.

Parameters

*tensors – any number of 1 dimensional tensors.

Returns

A tensor equivalent to converting all the input tensors into lists, applying itertools.product to these lists, and finally converting the resulting list into a tensor.

Return type

Tensor

Example:

>>> a = [1, 2, 3]
>>> b = [4, 5]
>>> tensor_a = torch.tensor(a)
>>> tensor_b = torch.tensor(b)
>>> torch.cartesian_prod(tensor_a, tensor_b)
tensor([[1, 4],
        [1, 5],
        [2, 4],
        [2, 5],
        [3, 4],
        [3, 5]])