"Tensors are the central data abstraction in PyTorch. This interactive\nnotebook provides an in-depth introduction to the ``torch.Tensor``\nclass.\n\nFirst things first, let\u2019s import the PyTorch module. We\u2019ll also add\nPython\u2019s math module to facilitate some of the examples.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import torch\nimport math"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating Tensors\n\nThe simplest way to create a tensor is with the ``torch.empty()`` call:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"x = torch.empty(3, 4)\nprint(type(x))\nprint(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let\u2019s unpack what we just did:\n\n- We created a tensor using one of the numerous factory methods\n attached to the ``torch`` module.\n- The tensor itself is 2-dimensional, having 3 rows and 4 columns.\n- The type of the object returned is ``torch.Tensor``, which is an\n alias for ``torch.FloatTensor``; by default, PyTorch tensors are\n populated with 32-bit floating point numbers. (More on data types\n below.)\n- You will probably see some random-looking values when printing your\n tensor. The ``torch.empty()`` call allocates memory for the tensor,\n but does not initialize it with any values - so what you\u2019re seeing is\n whatever was in memory at the time of allocation.\n\nA brief note on tensor dimensions and terminology:\n\n- You will sometimes see a 1-dimensional tensor called a\n *vector.*\n- Likewise, a 2-dimensional tensor is often referred to as a\n *matrix.*\n- Anything with more than two dimensions is generally just\n called a tensor.\n\nMore often than not, you\u2019ll want to initialize your tensor with some\nvalue. Common cases are all zeros, all ones, or random values, and the\n``torch`` module provides factory methods for all of these:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"zeros = torch.zeros(2, 3)\nprint(zeros)\n\nones = torch.ones(2, 3)\nprint(ones)\n\ntorch.manual_seed(1729)\nrandom = torch.rand(2, 3)\nprint(random)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The factory methods all do just what you\u2019d expect - we have a tensor\nfull of zeros, another full of ones, and another with random values\nin the half-open interval [0, 1).\n\n### Random Tensors and Seeding\n\nSpeaking of the random tensor, did you notice the call to\n``torch.manual_seed()`` immediately preceding it? Initializing tensors,\nsuch as a model\u2019s learning weights, with random values is common, but\nthere are times - especially in research settings - where you\u2019ll want\nsome assurance of the reproducibility of your results. Manually setting\nyour random number generator\u2019s seed is the way to do this. Let\u2019s look\nmore closely:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"torch.manual_seed(1729)\nrandom1 = torch.rand(2, 3)\nprint(random1)\n\nrandom2 = torch.rand(2, 3)\nprint(random2)\n\ntorch.manual_seed(1729)\nrandom3 = torch.rand(2, 3)\nprint(random3)\n\nrandom4 = torch.rand(2, 3)\nprint(random4)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What you should see above is that ``random1`` and ``random3`` carry\nidentical values, as do ``random2`` and ``random4``. Manually setting\nthe RNG\u2019s seed resets it, so that identical computations depending on\nrandom numbers should, in most settings, provide identical results.\n\nFor more information, see the [PyTorch documentation on\nreproducibility](https://pytorch.org/docs/stable/notes/randomness.html).\n\n### Tensor Shapes\n\nOften, when you\u2019re performing operations on two or more tensors, they\nwill need to be of the same *shape* - that is, having the same number of\ndimensions and the same number of cells in each dimension. For that, we\nhave the ``torch.*_like()`` methods:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"x = torch.empty(2, 2, 3)\nprint(x.shape)\nprint(x)\n\nempty_like_x = torch.empty_like(x)\nprint(empty_like_x.shape)\nprint(empty_like_x)\n\nzeros_like_x = torch.zeros_like(x)\nprint(zeros_like_x.shape)\nprint(zeros_like_x)\n\nones_like_x = torch.ones_like(x)\nprint(ones_like_x.shape)\nprint(ones_like_x)\n\nrand_like_x = torch.rand_like(x)\nprint(rand_like_x.shape)\nprint(rand_like_x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first new thing in the code cell above is the use of the ``.shape``\nproperty on a tensor. This property holds a tuple-like ``torch.Size``\nobject describing the extent of each dimension of a tensor - in our\ncase, ``x`` is a three-dimensional tensor with shape 2 x 2 x 3.\n\nBelow that, we call the ``.empty_like()``, ``.zeros_like()``,\n``.ones_like()``, and ``.rand_like()`` methods. Using the ``.shape``\nproperty, we can verify that each of these methods returns a tensor of\nidentical dimensionality and extent.\n\nThe last way to create a tensor that we\u2019ll cover is to specify its\ndata directly from a Python collection:\n\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"some_constants = torch.tensor([[3.1415926, 2.71828], [1.61803, 0.0072897]])\nprint(some_constants)\n\nsome_integers = torch.tensor((2, 3, 5, 7, 11, 13, 17, 19))\nprint(some_integers)\n\nmore_integers = torch.tensor(((2, 4, 6), [3, 6, 9]))\nprint(more_integers)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using ``torch.tensor()`` is the most straightforward way to create a\ntensor if you already have data in a Python tuple or list. As shown\nabove, nesting the collections will result in a multi-dimensional\ntensor.\n\n``torch.tensor()`` creates a copy of the data.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell throws a run-time error. This is intentional.\n"
]
},
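{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell this note refers to is missing from this copy. As an\nillustrative stand-in (not necessarily the original example), an\nelementwise operation on tensors of incompatible shapes raises a\n``RuntimeError``:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"a = torch.rand(3, 2)\nb = torch.rand(2, 3)\n\n# Shapes (3, 2) and (2, 3) cannot be multiplied elementwise:\nprint(a * b)"
]
},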
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are familiar with broadcasting semantics in NumPy\nndarrays, you\u2019ll find the same rules apply here.\n"
]
},
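{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch of those rules (the shapes here are chosen for\nillustration, not taken from the missing cells), multiplying a 2 x 4\ntensor by a 1 x 4 tensor broadcasts the smaller operand across the\nfirst dimension:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"rand = torch.rand(2, 4)\n\n# The 1 x 4 operand is stretched to match both rows of ``rand``:\ndoubled = rand * (torch.ones(1, 4) * 2)\n\nprint(rand)\nprint(doubled)"
]
},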
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell throws a run-time error. This is intentional.\n"
]
},
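{
"cell_type": "markdown",
"metadata": {},
"source": [
"The original error cell is likewise missing here. As a sketch of a\nbroadcasting failure: dimensions are compared from last to first, so a\n4 x 3 tensor cannot be broadcast against a 4 x 3 x 2 tensor:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"a = torch.ones(4, 3, 2)\n\n# Trailing dimensions (3, 2) vs. (4, 3) do not match - RuntimeError:\nb = a * torch.rand(4, 3)"
]
},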
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do not have a CUDA-compatible GPU and CUDA drivers\ninstalled, the executable cells in this section will not execute any\nGPU-related code.\n"
]
},
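{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common guard for this (a minimal sketch using the standard\n``torch.cuda`` API) is to check ``torch.cuda.is_available()`` before\ncreating a tensor on the GPU:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"if torch.cuda.is_available():\n    # Allocate the tensor directly on the GPU\n    gpu_rand = torch.rand(2, 2, device='cuda')\n    print(gpu_rand)\nelse:\n    print('Sorry, CPU only.')"
]
},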
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The ``(6 * 20 * 20,)`` argument in the final line of the cell\nabove is there because PyTorch expects a **tuple** when specifying a\ntensor shape - but when the shape is the first argument of a method, it\nlets us cheat and just use a series of integers. Here, we had to add the\nparentheses and comma to convince the method that this is really a\none-element tuple.\n"
]
},
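{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell this note refers to is missing from this copy. As a sketch of\nthe pattern it describes, both spellings below produce the same\n1-dimensional tensor of 2400 elements:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"output3d = torch.rand(6, 20, 20)\n\n# As a method, shape dimensions can be passed as plain integers:\nprint(output3d.reshape(6 * 20 * 20).shape)\n\n# The module-level function requires an explicit shape tuple,\n# hence the one-element tuple ``(6 * 20 * 20,)``:\nprint(torch.reshape(output3d, (6 * 20 * 20,)).shape)"
]
},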