Tensors

Tensors behave almost exactly the same way in PyTorch as they do in Torch.

Create a tensor of size (5 x 7) with uninitialized memory:

import torch
a = torch.FloatTensor(5, 7)


Initialize a tensor with values drawn from a normal distribution with mean=0, variance=1:

a = torch.randn(5, 7)
print(a)
print(a.size())


Out:

-0.7666  1.9604  0.1676  0.8704 -0.0501 -0.0214 -0.6518
0.1123 -0.8741  0.3737 -1.6100 -0.0028  0.2978  1.6777
1.0477 -1.4170  2.2568 -0.5171 -1.2469 -0.3603  0.8921
0.9908 -2.1646 -0.4373 -0.0729  1.0616  1.7550 -1.1601
0.0308  0.9393  0.0918  0.6312  1.1011  0.0254 -0.4414
[torch.FloatTensor of size 5x7]

torch.Size([5, 7])


Note

torch.Size is in fact a tuple, so it supports the same operations a tuple does.
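
For example, a minimal sketch of treating the size as a tuple (assuming a is the 5 x 7 tensor created above):

rows, cols = a.size()  # tuple unpacking
print(rows, cols)      # 5 7
print(a.size()[1])     # indexing: 7
print(len(a.size()))   # number of dimensions: 2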

In-place / Out-of-place

The first difference is that ALL operations that operate in-place on a tensor have an _ postfix. For example, add is the out-of-place version, and add_ is the in-place version.

a.fill_(3.5)
# a has now been filled with the value 3.5

b = a.add(4.0)
# a is still filled with 3.5
# new tensor b is returned with values 3.5 + 4.0 = 7.5

print(a, b)


Out:

3.5000  3.5000  3.5000  3.5000  3.5000  3.5000  3.5000
3.5000  3.5000  3.5000  3.5000  3.5000  3.5000  3.5000
3.5000  3.5000  3.5000  3.5000  3.5000  3.5000  3.5000
3.5000  3.5000  3.5000  3.5000  3.5000  3.5000  3.5000
3.5000  3.5000  3.5000  3.5000  3.5000  3.5000  3.5000
[torch.FloatTensor of size 5x7]

7.5000  7.5000  7.5000  7.5000  7.5000  7.5000  7.5000
7.5000  7.5000  7.5000  7.5000  7.5000  7.5000  7.5000
7.5000  7.5000  7.5000  7.5000  7.5000  7.5000  7.5000
7.5000  7.5000  7.5000  7.5000  7.5000  7.5000  7.5000
7.5000  7.5000  7.5000  7.5000  7.5000  7.5000  7.5000
[torch.FloatTensor of size 5x7]


Some operations like narrow do not have in-place versions, and hence, .narrow_ does not exist. Similarly, some operations like fill_ do not have an out-of-place version, so .fill does not exist.
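
For instance, a minimal sketch of the out-of-place narrow (it takes a dimension, a start index, and a length, and returns a view without modifying a):

c = a.narrow(0, 1, 2)  # rows 1 and 2 of a, returned as a new view
print(c.size())        # torch.Size([2, 7])
# a.narrow_(0, 1, 2) would raise an AttributeError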

Zero Indexing

Another difference is that Tensors are zero-indexed. (In Lua, tensors are one-indexed.)

b = a[0, 3]  # selects the element in the 1st row, 4th column of a


Tensors can also be indexed with Python's slicing:

b = a[:, 3:5]  # selects all rows and the 4th and 5th columns of a


No camel casing

The next small difference is that function names are no longer camelCase. For example, indexAdd is now called index_add_:

x = torch.ones(5, 5)
print(x)


Out:

1  1  1  1  1
1  1  1  1  1
1  1  1  1  1
1  1  1  1  1
1  1  1  1  1
[torch.FloatTensor of size 5x5]

z = torch.Tensor(5, 2)
z[:, 0] = 10
z[:, 1] = 100
print(z)


Out:

10  100
10  100
10  100
10  100
10  100
[torch.FloatTensor of size 5x2]

# dim=1 selects columns; the LongTensor holds the target column indices:
# column 0 of z is added to column 4 of x, and column 1 of z to column 0 of x
x.index_add_(1, torch.LongTensor([4, 0]), z)
print(x)


Out:

101    1    1    1   11
101    1    1    1   11
101    1    1    1   11
101    1    1    1   11
101    1    1    1   11
[torch.FloatTensor of size 5x5]


NumPy Bridge

Converting a torch Tensor to a NumPy array and vice versa is a breeze. The torch Tensor and the NumPy array share their underlying memory locations, so changing one will change the other.

Converting a torch Tensor to a NumPy Array

a = torch.ones(5)
print(a)


Out:

1
1
1
1
1
[torch.FloatTensor of size 5]

b = a.numpy()
print(b)


Out:

[ 1.  1.  1.  1.  1.]

a.add_(1)
print(a)
print(b)    # see how the numpy array changed in value


Out:

2
2
2
2
2
[torch.FloatTensor of size 5]

[ 2.  2.  2.  2.  2.]


Converting a NumPy Array to a torch Tensor

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)  # modify the numpy array in place
print(a)
print(b)  # see how changing the np array changed the torch Tensor automatically


Out:

[ 2.  2.  2.  2.  2.]

2
2
2
2
2
[torch.DoubleTensor of size 5]


All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
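
For example, a minimal sketch of a round trip with an IntTensor (assuming numpy is imported as np, as above):

c = torch.IntTensor(3).fill_(1)
print(c.numpy())                              # int32 array: [1 1 1]
d = torch.from_numpy(np.zeros(3, dtype=np.int32))
print(d)                                      # an IntTensor of size 3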

CUDA Tensors

CUDA Tensors are nice and easy in PyTorch, and transferring a tensor between the CPU and the GPU will retain its underlying type.

# let us run this cell only if CUDA is available
if torch.cuda.is_available():
    # creates a LongTensor and transfers it
    # to GPU as torch.cuda.LongTensor
    a = torch.LongTensor(10).fill_(3).cuda()
    print(type(a))
    b = a.cpu()
    # transfers it to CPU, back to
    # being a torch.LongTensor


Out:

<class 'torch.cuda.LongTensor'>

