# What is PyTorch?

It’s a Python-based scientific computing package targeted at two sets of audiences:

- A replacement for NumPy to use the power of GPUs
- A deep learning research platform that provides maximum flexibility and speed

## Getting Started

### Tensors

Tensors are similar to NumPy’s ndarrays, with the addition that they can also be used on a GPU to accelerate computing.

from __future__ import print_function
import torch


Construct a 5x3 matrix, uninitialized:

x = torch.Tensor(5, 3)
print(x)


Out:

0.0000e+00  0.0000e+00  5.1715e+36
4.5759e-41  5.1720e+36  4.5759e-41
1.6410e+38  4.5759e-41  1.6410e+38
4.5759e-41  2.0297e+38  4.5759e-41
1.9567e+38  4.5759e-41 -2.9502e-12
[torch.FloatTensor of size 5x3]
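
Other factory functions follow the same shape-first calling convention; a quick sketch using torch.zeros and torch.ones (the variable names below are just for illustration):

zeros = torch.zeros(5, 3)  # 5x3 matrix filled with zeros
ones = torch.ones(5, 3)    # 5x3 matrix filled with ones
print(zeros)
print(ones)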


Construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)


Out:

0.0678  0.3477  0.8519
0.1702  0.1063  0.1622
0.3628  0.9545  0.9967
0.3112  0.8519  0.2486
0.4186  0.6778  0.4960
[torch.FloatTensor of size 5x3]


Get its size:

print(x.size())


Out:

torch.Size([5, 3])


Note

torch.Size is in fact a tuple, so it supports all tuple operations.
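
For example, a minimal sketch of tuple-style use, reusing x from above:

rows, cols = x.size()  # tuple unpacking
print(rows, cols)      # prints: 5 3
print(x.size()[0])     # indexing works too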

### Operations

There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.

Addition: syntax 1

y = torch.rand(5, 3)
print(x + y)


Out:

0.0940  0.7706  1.5622
0.1752  0.6163  0.2284
0.9932  1.4596  1.6300
0.9993  1.3482  1.2210
0.6945  0.7995  0.7464
[torch.FloatTensor of size 5x3]


Addition: syntax 2

print(torch.add(x, y))


Out:

0.0940  0.7706  1.5622
0.1752  0.6163  0.2284
0.9932  1.4596  1.6300
0.9993  1.3482  1.2210
0.6945  0.7995  0.7464
[torch.FloatTensor of size 5x3]
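
There is also a method form on the tensor itself, equivalent to the two calls above (a small sketch; it returns a new tensor and leaves x and y unchanged):

print(x.add(y))  # same values as x + y and torch.add(x, y)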


Addition: providing an output tensor as an argument

result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
print(result)


Out:

0.0940  0.7706  1.5622
0.1752  0.6163  0.2284
0.9932  1.4596  1.6300
0.9993  1.3482  1.2210
0.6945  0.7995  0.7464
[torch.FloatTensor of size 5x3]


Addition: in-place

# adds x to y
y.add_(x)
print(y)


Out:

0.0940  0.7706  1.5622
0.1752  0.6163  0.2284
0.9932  1.4596  1.6300
0.9993  1.3482  1.2210
0.6945  0.7995  0.7464
[torch.FloatTensor of size 5x3]


Note

Any operation that mutates a tensor in-place is post-fixed with an _. For example, x.copy_(y) and x.t_() will change x.
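
A minimal illustration on throwaway tensors (the names a and b exist only for this example):

a = torch.ones(2, 2)
b = torch.rand(2, 2)
a.add_(b)   # in-place: a now holds the element-wise sum
a.copy_(b)  # in-place: a now holds a copy of b's values
print(a)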

You can use standard NumPy-like indexing with all bells and whistles!

print(x[:, 1])


Out:

0.3477
0.1063
0.9545
0.8519
0.6778
[torch.FloatTensor of size 5]
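
A few more NumPy-style patterns that work the same way (a sketch; each can be tried on the same x):

print(x[0])      # first row
print(x[1:3])    # rows 1 and 2
print(x[:, :2])  # first two columns of every row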


Resizing: If you want to resize/reshape a tensor, you can use the view method:

x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())


Out:

torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
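
Note that view does not copy data: the new tensor shares storage with the original. A quick check, reusing x and y from the snippet above:

y[0] = 100.0     # modify the flattened view...
print(x[0, 0])   # ...and the change shows up in x as well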


100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, and random number generation, are described in the PyTorch documentation.

## NumPy Bridge

Converting a Torch Tensor to a NumPy array and vice versa is a breeze.

The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.

### Converting a Torch Tensor to a NumPy Array

a = torch.ones(5)
print(a)


Out:

1
1
1
1
1
[torch.FloatTensor of size 5]

b = a.numpy()
print(b)


Out:

[1. 1. 1. 1. 1.]


See how the NumPy array changes in value when the tensor is modified in place:

a.add_(1)
print(a)
print(b)


Out:

2
2
2
2
2
[torch.FloatTensor of size 5]

[2. 2. 2. 2. 2.]


### Converting NumPy Array to Torch Tensor

See how changing the NumPy array changes the Torch Tensor automatically:

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)


Out:

[2. 2. 2. 2. 2.]

2
2
2
2
2
[torch.DoubleTensor of size 5]
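
The link is two-way: mutating the tensor in place updates the array as well. Continuing the snippet above:

b.add_(1)  # in-place on the tensor
print(a)   # the NumPy array reflects the change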


All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
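
For instance, a DoubleTensor round-trips through NumPy just like a FloatTensor (a quick sketch; t is only an example name):

t = torch.ones(3).double()
print(t.numpy().dtype)  # float64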

## CUDA Tensors

Tensors can be moved onto the GPU using the .cuda() method.

# let us run this cell only if CUDA is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
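
Moving a result back to host memory is symmetric, via the .cpu() method (a sketch, assuming the cell above actually ran on a CUDA machine):

if torch.cuda.is_available():
    z = (x + y).cpu()  # copy the GPU result back to the CPU
    print(z)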

