What is PyTorch?

It’s a Python-based scientific computing package targeted at two sets of audiences:

  • A replacement for NumPy to use the power of GPUs
  • A deep learning research platform that provides maximum flexibility and speed

Getting Started

Tensors

Tensors are similar to NumPy’s ndarrays, with the addition that Tensors can also be used on a GPU to accelerate computing.

from __future__ import print_function
import torch

Construct a 5x3 matrix, uninitialized:

x = torch.Tensor(5, 3)
print(x)

Out:

-2.9226e-26  1.5549e-41  1.5885e+14
 0.0000e+00  7.0065e-45  0.0000e+00
 7.0065e-45  0.0000e+00  4.4842e-44
 0.0000e+00  4.6243e-44  0.0000e+00
 1.5810e+14  0.0000e+00  1.6196e+14
[torch.FloatTensor of size 5x3]

Construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)

Out:

0.8168  0.4588  0.8139
 0.7271  0.3067  0.2826
 0.1570  0.2931  0.3173
 0.8638  0.6364  0.6177
 0.2296  0.1411  0.1117
[torch.FloatTensor of size 5x3]

Get its size:

print(x.size())

Out:

torch.Size([5, 3])

Note

torch.Size is in fact a tuple, so it supports all tuple operations.
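
Since torch.Size is a tuple subclass, the usual tuple operations apply. A quick sketch:

```python
import torch

x = torch.rand(5, 3)
size = x.size()

# torch.Size supports the usual tuple operations:
rows, cols = size        # unpacking
print(rows, cols)        # 5 3
print(len(size))         # 2
print(size == (5, 3))    # True -- compares equal to a plain tuple
```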

Operations

There are multiple syntaxes for operations. Let’s look at addition as an example.

Addition: syntax 1

y = torch.rand(5, 3)
print(x + y)

Out:

0.9616  0.8727  1.6763
 1.4781  0.7961  1.2082
 0.6717  0.9821  0.6129
 1.2544  1.0118  1.2720
 1.0912  0.3207  0.4200
[torch.FloatTensor of size 5x3]

Addition: syntax 2

print(torch.add(x, y))

Out:

0.9616  0.8727  1.6763
 1.4781  0.7961  1.2082
 0.6717  0.9821  0.6129
 1.2544  1.0118  1.2720
 1.0912  0.3207  0.4200
[torch.FloatTensor of size 5x3]

Addition: giving an output tensor

result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
print(result)

Out:

0.9616  0.8727  1.6763
 1.4781  0.7961  1.2082
 0.6717  0.9821  0.6129
 1.2544  1.0118  1.2720
 1.0912  0.3207  0.4200
[torch.FloatTensor of size 5x3]

Addition: in-place

# adds x to y
y.add_(x)
print(y)

Out:

0.9616  0.8727  1.6763
 1.4781  0.7961  1.2082
 0.6717  0.9821  0.6129
 1.2544  1.0118  1.2720
 1.0912  0.3207  0.4200
[torch.FloatTensor of size 5x3]

Note

Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y) and x.t_() will change x.
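
To illustrate the convention, here is a small sketch using the two operations from the note:

```python
import torch

x = torch.ones(3, 2)
y = torch.zeros(3, 2)

x.copy_(y)   # copies y's values into x in-place; x is now all zeros
print(x)

x.t_()       # transposes x in-place; x is now 2x3
print(x.size())
```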

You can use standard NumPy-like indexing with all bells and whistles!

print(x[:, 1])

Out:

0.4588
 0.3067
 0.2931
 0.6364
 0.1411
[torch.FloatTensor of size 5]

Read later:

100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described in the torch documentation.
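
A few of those operations, picked here purely for illustration:

```python
import torch

x = torch.rand(5, 3)

print(x.t().size())               # transpose: 3x5
print(x[1:3].size())              # slicing rows 1 and 2: 2x3
print(torch.mm(x, x.t()).size())  # matrix multiplication: 5x5
print(x.sum(), x.mean())          # reductions over all elements
```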

NumPy Bridge

Converting a Torch Tensor to a NumPy array and vice versa is a breeze.

The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.

Converting a Torch Tensor to a NumPy Array

a = torch.ones(5)
print(a)

Out:

1
 1
 1
 1
 1
[torch.FloatTensor of size 5]

b = a.numpy()
print(b)

Out:

[ 1.  1.  1.  1.  1.]

See how the NumPy array changes in value when the Tensor is modified in-place:

a.add_(1)
print(a)
print(b)

Out:

2
 2
 2
 2
 2
[torch.FloatTensor of size 5]

[ 2.  2.  2.  2.  2.]

Converting a NumPy Array to a Torch Tensor

See how changing the NumPy array changed the Torch Tensor automatically:

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

Out:

[ 2.  2.  2.  2.  2.]

 2
 2
 2
 2
 2
[torch.DoubleTensor of size 5]

All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
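
For example, other CPU tensor types convert just as readily; a small sketch showing the resulting NumPy dtypes:

```python
import torch

# each CPU tensor type maps to the corresponding NumPy dtype
print(torch.ones(3).int().numpy().dtype)     # int32
print(torch.ones(3).double().numpy().dtype)  # float64
print(torch.ones(3).byte().numpy().dtype)    # uint8
```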

CUDA Tensors

Tensors can be moved onto the GPU using the .cuda method.

# let us run this cell only if CUDA is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
