torch.masked
Introduction
Motivation
Warning
The PyTorch API of masked tensors is in the prototype stage and may change in the future.
MaskedTensor serves as an extension to torch.Tensor that provides the user with the ability to:

- use any masked semantics (e.g. variable-length tensors, nan* operators, etc.)
- differentiate between 0 and NaN gradients
- support various sparse applications (see tutorial below)
“Specified” and “unspecified” have a long history in PyTorch without formal semantics and certainly without
consistency; indeed, MaskedTensor was born out of a build-up of issues that the vanilla torch.Tensor
class could not properly address. Thus, a primary goal of MaskedTensor is to become the source of truth for
said “specified” and “unspecified” values in PyTorch, where they are a first-class citizen instead of an afterthought.
In turn, this should further unlock sparsity’s potential,
enable safer and more consistent operators, and provide a smoother and more intuitive experience
for users and developers alike.
What is a MaskedTensor?
A MaskedTensor is a tensor subclass that consists of 1) an input (data), and 2) a mask. The mask tells us which entries from the input should be included or ignored.
By way of example, suppose that we wanted to mask out all values that are equal to 0 (represented by the gray) and take the max:
On top is the vanilla tensor example while the bottom is MaskedTensor where all the 0’s are masked out. This clearly yields a different result depending on whether we have the mask, but this flexible structure allows the user to systematically ignore any elements they’d like during computation.
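Since the accompanying figure is not reproduced here, the max example can be sketched in code. This is a minimal sketch assuming the prototype torch.masked API (the masked_tensor constructor and the get_data() accessor), which may change:

```python
import torch
from torch.masked import masked_tensor  # prototype API; may change

data = torch.tensor([-1.0, 0.0, -3.0, 0.0])
mask = data != 0  # mask out all values equal to 0

# Vanilla tensor: the 0s participate in the reduction.
vanilla_max = data.amax(dim=0)   # tensor(0.)

# MaskedTensor: the 0s are ignored entirely, so the max
# is taken only over {-1., -3.}.
mt = masked_tensor(data, mask)
masked_max = mt.amax(dim=0)

print(vanilla_max, masked_max)
```

The same data produces different results purely because of the mask, which is the point of the example.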
There are already a number of existing tutorials that we’ve written to help users onboard, such as the Overview and Advanced semantics tutorials referenced below.
Supported Operators
Unary Operators
Unary operators are operators that contain only a single input. Applying them to MaskedTensors is relatively straightforward: if the data is masked out at a given index, the result is masked out as well; otherwise, the operator is applied to the unmasked data.
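As a sketch (assuming the prototype masked_tensor constructor and the get_mask()/get_data() accessors, which may change), applying a unary operator leaves the mask untouched:

```python
import torch
from torch.masked import masked_tensor  # prototype API; may change

data = torch.tensor([0.0, 10000.0, 2.0])
mask = torch.tensor([True, False, True])  # ignore the middle entry
mt = masked_tensor(data, mask)

out = torch.exp(mt)
# The mask is carried through unchanged; only the unmasked
# entries are meaningful in the result.
print(out.get_mask())
```

Note that whatever value sits at a masked-out index is irrelevant: it stays masked in the output.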
The available unary operators are:
torch.abs: Computes the absolute value of each element in input.
torch.absolute: Alias for torch.abs().
torch.acos: Computes the inverse cosine of each element in input.
torch.arccos: Alias for torch.acos().
torch.acosh: Returns a new tensor with the inverse hyperbolic cosine of the elements of input.
torch.arccosh: Alias for torch.acosh().
torch.angle: Computes the elementwise angle (in radians) of the given input tensor.
torch.asin: Returns a new tensor with the arcsine of the elements of input.
torch.arcsin: Alias for torch.asin().
torch.asinh: Returns a new tensor with the inverse hyperbolic sine of the elements of input.
torch.arcsinh: Alias for torch.asinh().
torch.atan: Returns a new tensor with the arctangent of the elements of input.
torch.arctan: Alias for torch.atan().
torch.atanh: Returns a new tensor with the inverse hyperbolic tangent of the elements of input.
torch.arctanh: Alias for torch.atanh().
torch.bitwise_not: Computes the bitwise NOT of the given input tensor.
torch.ceil: Returns a new tensor with the ceil of the elements of input.
torch.clip: Alias for torch.clamp().
torch.conj_physical: Computes the elementwise conjugate of the given input tensor.
torch.cos: Returns a new tensor with the cosine of the elements of input.
torch.cosh: Returns a new tensor with the hyperbolic cosine of the elements of input.
torch.deg2rad: Returns a new tensor with each of the elements of input converted from angles in degrees to radians.
torch.digamma: Alias for torch.special.digamma().
torch.erf: Alias for torch.special.erf().
torch.erfc: Alias for torch.special.erfc().
torch.erfinv: Alias for torch.special.erfinv().
torch.exp: Returns a new tensor with the exponential of the elements of the input tensor input.
torch.exp2: Alias for torch.special.exp2().
torch.expm1: Alias for torch.special.expm1().
torch.fix: Alias for torch.trunc().
torch.floor: Returns a new tensor with the floor of the elements of input.
torch.frac: Computes the fractional portion of each element in input.
torch.lgamma: Computes the natural logarithm of the absolute value of the gamma function on input.
torch.log: Returns a new tensor with the natural logarithm of the elements of input.
torch.log10: Returns a new tensor with the logarithm to the base 10 of the elements of input.
torch.log1p: Returns a new tensor with the natural logarithm of (1 + input).
torch.log2: Returns a new tensor with the logarithm to the base 2 of the elements of input.
torch.logit: Alias for torch.special.logit().
torch.i0: Alias for torch.special.i0().
torch.isnan: Returns a new tensor with boolean elements representing if each element of input is NaN or not.
torch.nan_to_num: Replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively.
torch.neg: Returns a new tensor with the negative of the elements of input.
torch.negative: Alias for torch.neg().
torch.positive: Returns input.
torch.pow: Takes the power of each element in input with exponent and returns a tensor with the result.
torch.rad2deg: Returns a new tensor with each of the elements of input converted from angles in radians to degrees.
torch.reciprocal: Returns a new tensor with the reciprocal of the elements of input.
torch.round: Rounds elements of input to the nearest integer.
torch.rsqrt: Returns a new tensor with the reciprocal of the square root of each of the elements of input.
torch.sigmoid: Alias for torch.special.expit().
torch.sign: Returns a new tensor with the signs of the elements of input.
torch.sgn: This function is an extension of torch.sign() to complex tensors.
torch.signbit: Tests if each element of input has its sign bit set or not.
torch.sin: Returns a new tensor with the sine of the elements of input.
torch.sinc: Alias for torch.special.sinc().
torch.sinh: Returns a new tensor with the hyperbolic sine of the elements of input.
torch.sqrt: Returns a new tensor with the square root of the elements of input.
torch.square: Returns a new tensor with the square of the elements of input.
torch.tan: Returns a new tensor with the tangent of the elements of input.
torch.tanh: Returns a new tensor with the hyperbolic tangent of the elements of input.
torch.trunc: Returns a new tensor with the truncated integer values of the elements of input.
The available in-place unary operators are all of the above except:

torch.angle: Computes the elementwise angle (in radians) of the given input tensor.
torch.positive: Returns input.
torch.signbit: Tests if each element of input has its sign bit set or not.
torch.isnan: Returns a new tensor with boolean elements representing if each element of input is NaN or not.
Binary Operators
As you may have seen in the tutorial, MaskedTensor
also has binary operations implemented with the caveat
that the masks in the two MaskedTensors must match or else an error will be raised. As noted in the error, if you
need support for a particular operator or have proposed semantics for how they should behave instead, please open
an issue on GitHub. For now, we have decided to go with the most conservative implementation to ensure that users
know exactly what is going on and are being intentional about their decisions with masked semantics.
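A short sketch of that caveat follows (assuming the prototype masked_tensor constructor; the exact exception type raised on a mask mismatch is an implementation detail, so it is caught broadly here):

```python
import torch
from torch.masked import masked_tensor  # prototype API; may change

mask = torch.tensor([True, False, True])
a = masked_tensor(torch.tensor([1.0, 2.0, 3.0]), mask)
b = masked_tensor(torch.tensor([10.0, 20.0, 30.0]), mask)

total = a + b  # fine: the masks match; the result keeps the shared mask

# With mismatched masks there are no agreed-upon semantics,
# so the operation raises instead of silently guessing.
c = masked_tensor(torch.ones(3), torch.tensor([True, True, False]))
try:
    a + c
except Exception as err:
    print("mismatched masks:", err)
```

Raising on a mismatch is the conservative choice described above: the user must make the masks agree explicitly before combining two MaskedTensors.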
The available binary operators are:
torch.add: Adds other, scaled by alpha, to input.
torch.atan2: Elementwise arctangent of $\text{input}_{i} / \text{other}_{i}$ with consideration of the quadrant.
torch.arctan2: Alias for torch.atan2().
torch.bitwise_and: Computes the bitwise AND of input and other.
torch.bitwise_or: Computes the bitwise OR of input and other.
torch.bitwise_xor: Computes the bitwise XOR of input and other.
torch.bitwise_left_shift: Computes the left arithmetic shift of input by other bits.
torch.bitwise_right_shift: Computes the right arithmetic shift of input by other bits.
torch.div: Divides each element of the input input by the corresponding element of other.
torch.divide: Alias for torch.div().
torch.fmod: Applies C++'s std::fmod entrywise.
torch.logaddexp: Logarithm of the sum of exponentiations of the inputs.
torch.logaddexp2: Logarithm of the sum of exponentiations of the inputs in base 2.
torch.mul: Multiplies input by other.
torch.multiply: Alias for torch.mul().
torch.nextafter: Return the next floating-point value after input towards other, elementwise.
torch.remainder: Computes Python's modulus operation entrywise.
torch.sub: Subtracts other, scaled by alpha, from input.
torch.subtract: Alias for torch.sub().
torch.true_divide: Alias for torch.div() with rounding_mode=None.
torch.eq: Computes elementwise equality.
torch.ne: Computes $\text{input} \neq \text{other}$ elementwise.
torch.le: Computes $\text{input} \leq \text{other}$ elementwise.
torch.ge: Computes $\text{input} \geq \text{other}$ elementwise.
torch.greater: Alias for torch.gt().
torch.greater_equal: Alias for torch.ge().
torch.gt: Computes $\text{input} > \text{other}$ elementwise.
torch.less_equal: Alias for torch.le().
torch.lt: Computes $\text{input} < \text{other}$ elementwise.
torch.less: Alias for torch.lt().
torch.maximum: Computes the elementwise maximum of input and other.
torch.minimum: Computes the elementwise minimum of input and other.
torch.fmax: Computes the elementwise maximum of input and other.
torch.fmin: Computes the elementwise minimum of input and other.
torch.not_equal: Alias for torch.ne().
The available in-place binary operators are all of the above except:

torch.logaddexp: Logarithm of the sum of exponentiations of the inputs.
torch.logaddexp2: Logarithm of the sum of exponentiations of the inputs in base 2.
torch.equal: True if two tensors have the same size and elements, False otherwise.
torch.fmin: Computes the elementwise minimum of input and other.
torch.minimum: Computes the elementwise minimum of input and other.
torch.fmax: Computes the elementwise maximum of input and other.
Reductions
The following reductions are available (with autograd support). For more information, the Overview tutorial details some examples of reductions, while the Advanced semantics tutorial has some further in-depth discussions about how we decided on certain reduction semantics.
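For instance, a masked sum aggregates only the unmasked entries. This is a minimal sketch assuming the prototype masked_tensor constructor and get_data() accessor:

```python
import torch
from torch.masked import masked_tensor  # prototype API; may change

data = torch.arange(6, dtype=torch.float).reshape(2, 3)  # [[0,1,2],[3,4,5]]
mask = torch.tensor([[True, False, True],
                     [False, True, True]])
mt = masked_tensor(data, mask)

total = mt.sum()  # only 0, 2, 4, and 5 contribute: 11
print(total)
```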
torch.sum: Returns the sum of all elements in the input tensor.
torch.mean: Returns the mean value of all elements in the input tensor.
torch.amin: Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.
torch.amax: Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.
torch.argmin: Returns the indices of the minimum value(s) of the flattened tensor or along a dimension.
torch.argmax: Returns the indices of the maximum value of all elements in the input tensor.
torch.prod: Returns the product of all elements in the input tensor.
torch.all: Tests if all elements in input evaluate to True.
torch.norm: Returns the matrix norm or vector norm of a given tensor.
torch.var: Calculates the variance over the dimensions specified by dim.
torch.std: Calculates the standard deviation over the dimensions specified by dim.
View and select functions
We’ve included a number of view and select functions as well; intuitively, these operators will apply to both the data and the mask and then wrap the result in a MaskedTensor. For a quick example, consider select():
>>> data = torch.arange(12, dtype=torch.float).reshape(3, 4)
>>> data
tensor([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]])
>>> mask = torch.tensor([[True, False, False, True], [False, True, False, False], [True, True, True, True]])
>>> mt = masked_tensor(data, mask)
>>> data.select(0, 1)
tensor([4., 5., 6., 7.])
>>> mask.select(0, 1)
tensor([False, True, False, False])
>>> mt.select(0, 1)
MaskedTensor(
  [      --,   5.0000,       --,       --]
)
The following ops are currently supported:
torch.atleast_1d: Returns a 1-dimensional view of each input tensor with zero dimensions.
torch.broadcast_tensors: Broadcasts the given tensors according to Broadcasting semantics.
torch.broadcast_to: Broadcasts input to the shape shape.
torch.cat: Concatenates the given sequence of tensors in the given dimension.
torch.chunk: Attempts to split a tensor into the specified number of chunks.
torch.column_stack: Creates a new tensor by horizontally stacking the tensors in tensors.
torch.dsplit: Splits input, a tensor with three or more dimensions, into multiple tensors depthwise according to indices_or_sections.
torch.flatten: Flattens input by reshaping it into a one-dimensional tensor.
torch.hsplit: Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections.
torch.hstack: Stack tensors in sequence horizontally (column wise).
torch.kron: Computes the Kronecker product, denoted by $\otimes$, of input and other.
torch.meshgrid: Creates grids of coordinates specified by the 1D inputs in tensors.
torch.narrow: Returns a new tensor that is a narrowed version of input.
torch.nn.functional.unfold: Extracts sliding local blocks from a batched input tensor.
torch.ravel: Return a contiguous flattened tensor.
torch.select: Slices the input tensor along the selected dimension at the given index.
torch.split: Splits the tensor into chunks.
torch.stack: Concatenates a sequence of tensors along a new dimension.
torch.t: Expects input to be a tensor with 2 or fewer dimensions and transposes dimensions 0 and 1.
torch.transpose: Returns a tensor that is a transposed version of input.
torch.vsplit: Splits input, a tensor with two or more dimensions, into multiple tensors vertically according to indices_or_sections.
torch.vstack: Stack tensors in sequence vertically (row wise).
Tensor.expand: Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
Tensor.expand_as: Expand this tensor to the same size as other.
Tensor.reshape: Returns a tensor with the same data and number of elements as self but with the specified shape.
Tensor.reshape_as: Returns this tensor as the same shape as other.
Tensor.unfold: Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.
Tensor.view: Returns a new tensor with the same data as the self tensor but of a different shape.