.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "beginner/basics/tensorqs_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_beginner_basics_tensorqs_tutorial.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_basics_tensorqs_tutorial.py:


`Learn the Basics `_ ||
`Quickstart `_ ||
**Tensors** ||
`Datasets & DataLoaders `_ ||
`Transforms `_ ||
`Build Model `_ ||
`Autograd `_ ||
`Optimization `_ ||
`Save & Load Model `_

Tensors
==========================

Tensors are specialized data structures that are very similar to arrays
and matrices. In PyTorch, we use tensors to encode the inputs and
outputs of a model, as well as the model’s parameters.

Tensors are similar to `NumPy’s `_ ndarrays, except that tensors can run on
GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can
often share the same underlying memory, eliminating the need to copy data
(see :ref:`bridge-to-np-label`). Tensors are also optimized for automatic
differentiation (we'll see more about that later in the `Autograd `__
section). If you’re familiar with ndarrays, you’ll be right at home with
the Tensor API. If not, follow along!

.. GENERATED FROM PYTHON SOURCE LINES 23-28

.. code-block:: default


    import torch
    import numpy as np


.. GENERATED FROM PYTHON SOURCE LINES 29-37

Initializing a Tensor
~~~~~~~~~~~~~~~~~~~~~

Tensors can be initialized in various ways. Take a look at the following examples:

**Directly from data**

Tensors can be created directly from data. The data type is automatically inferred.

.. GENERATED FROM PYTHON SOURCE LINES 37-41

.. code-block:: default


    data = [[1, 2],[3, 4]]
    x_data = torch.tensor(data)


.. GENERATED FROM PYTHON SOURCE LINES 42-45

**From a NumPy array**

Tensors can be created from NumPy arrays (and vice versa - see :ref:`bridge-to-np-label`).

.. GENERATED FROM PYTHON SOURCE LINES 45-49

.. code-block:: default


    np_array = np.array(data)
    x_np = torch.from_numpy(np_array)


.. GENERATED FROM PYTHON SOURCE LINES 50-53

**From another tensor:**

The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.

.. GENERATED FROM PYTHON SOURCE LINES 53-61

.. code-block:: default


    x_ones = torch.ones_like(x_data)  # retains the properties of x_data
    print(f"Ones Tensor: \n {x_ones} \n")

    x_rand = torch.rand_like(x_data, dtype=torch.float)  # overrides the datatype of x_data
    print(f"Random Tensor: \n {x_rand} \n")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Ones Tensor:
     tensor([[1, 1],
            [1, 1]])

    Random Tensor:
     tensor([[0.8823, 0.9150],
            [0.3829, 0.9593]])


.. GENERATED FROM PYTHON SOURCE LINES 62-65

**With random or constant values:**

``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.

.. GENERATED FROM PYTHON SOURCE LINES 65-77

.. code-block:: default


    shape = (2,3,)
    rand_tensor = torch.rand(shape)
    ones_tensor = torch.ones(shape)
    zeros_tensor = torch.zeros(shape)

    print(f"Random Tensor: \n {rand_tensor} \n")
    print(f"Ones Tensor: \n {ones_tensor} \n")
    print(f"Zeros Tensor: \n {zeros_tensor}")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Random Tensor:
     tensor([[0.3904, 0.6009, 0.2566],
            [0.7936, 0.9408, 0.1332]])

    Ones Tensor:
     tensor([[1., 1., 1.],
            [1., 1., 1.]])

    Zeros Tensor:
     tensor([[0., 0., 0.],
            [0., 0., 0.]])
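These factory functions also accept an optional ``dtype`` (and ``device``) keyword
argument when the defaults are not what you want. Below is a minimal, illustrative
sketch (the variable names are arbitrary and not part of the example above):

.. code-block:: default


    # Request a specific datatype at creation time (illustrative sketch)
    int_ones = torch.ones(shape, dtype=torch.int64)
    double_zeros = torch.zeros(shape, dtype=torch.float64)

    print(f"Int64 Ones Tensor: \n {int_ones} \n")
    print(f"Float64 Zeros Tensor: \n {double_zeros}")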
.. GENERATED FROM PYTHON SOURCE LINES 78-80

--------------


.. GENERATED FROM PYTHON SOURCE LINES 82-86

Attributes of a Tensor
~~~~~~~~~~~~~~~~~~~~~~

Tensor attributes describe a tensor's shape, datatype, and the device on which it is stored.

.. GENERATED FROM PYTHON SOURCE LINES 86-94

.. code-block:: default


    tensor = torch.rand(3,4)

    print(f"Shape of tensor: {tensor.shape}")
    print(f"Datatype of tensor: {tensor.dtype}")
    print(f"Device tensor is stored on: {tensor.device}")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Shape of tensor: torch.Size([3, 4])
    Datatype of tensor: torch.float32
    Device tensor is stored on: cpu


.. GENERATED FROM PYTHON SOURCE LINES 95-97

--------------


.. GENERATED FROM PYTHON SOURCE LINES 99-112

Operations on Tensors
~~~~~~~~~~~~~~~~~~~~~

Over 100 tensor operations, including arithmetic, linear algebra, matrix
manipulation (transposing, indexing, slicing), sampling, and more, are
comprehensively described `here `__.

Each of these operations can be run on the GPU (typically at higher speeds
than on a CPU). If you’re using Colab, allocate a GPU by going to
Runtime > Change runtime type > GPU.

By default, tensors are created on the CPU. We need to explicitly move
tensors to the GPU using the ``.to`` method (after checking for GPU
availability). Keep in mind that copying large tensors across devices can
be expensive in terms of time and memory!

.. GENERATED FROM PYTHON SOURCE LINES 112-118

.. code-block:: default


    # We move our tensor to the GPU if available
    if torch.cuda.is_available():
        tensor = tensor.to("cuda")


.. GENERATED FROM PYTHON SOURCE LINES 119-122

Try out some of the operations from the list.
If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.

.. GENERATED FROM PYTHON SOURCE LINES 124-125

**Standard numpy-like indexing and slicing:**

.. GENERATED FROM PYTHON SOURCE LINES 125-133

.. code-block:: default


    tensor = torch.ones(4, 4)
    print(f"First row: {tensor[0]}")
    print(f"First column: {tensor[:, 0]}")
    print(f"Last column: {tensor[..., -1]}")
    tensor[:,1] = 0
    print(tensor)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    First row: tensor([1., 1., 1., 1.])
    First column: tensor([1., 1., 1., 1.])
    Last column: tensor([1., 1., 1., 1.])
    tensor([[1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.]])


.. GENERATED FROM PYTHON SOURCE LINES 134-137

**Joining tensors**

You can use ``torch.cat`` to concatenate a sequence of tensors along a given
dimension. See also `torch.stack `__, another tensor joining operator that is
subtly different from ``torch.cat``; a short comparison follows the example below.

.. GENERATED FROM PYTHON SOURCE LINES 137-141

.. code-block:: default


    t1 = torch.cat([tensor, tensor, tensor], dim=1)
    print(t1)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
            [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
            [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
            [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])
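To make the difference concrete, here is a minimal sketch reusing ``tensor`` and
``t1`` from above: ``torch.cat`` grows an existing dimension, while ``torch.stack``
joins the tensors along a new dimension.

.. code-block:: default


    # ``tensor`` is 4 x 4; ``t1`` is the torch.cat result from above
    stacked = torch.stack([tensor, tensor, tensor], dim=1)
    print(f"torch.cat result shape:   {t1.shape}")       # torch.Size([4, 12]) -- existing dim grows
    print(f"torch.stack result shape: {stacked.shape}")  # torch.Size([4, 3, 4]) -- new dim inserted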
.. GENERATED FROM PYTHON SOURCE LINES 142-143

**Arithmetic operations**

.. GENERATED FROM PYTHON SOURCE LINES 143-161

.. code-block:: default


    # This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
    # ``tensor.T`` returns the transpose of a tensor
    y1 = tensor @ tensor.T
    y2 = tensor.matmul(tensor.T)

    y3 = torch.rand_like(y1)
    torch.matmul(tensor, tensor.T, out=y3)


    # This computes the element-wise product. z1, z2, z3 will have the same value
    z1 = tensor * tensor
    z2 = tensor.mul(tensor)

    z3 = torch.rand_like(tensor)
    torch.mul(tensor, tensor, out=z3)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    tensor([[1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.]])


.. GENERATED FROM PYTHON SOURCE LINES 162-165

**Single-element tensors**

If you have a one-element tensor, for example by aggregating all values of a
tensor into one value, you can convert it to a Python numerical value using
``item()``:

.. GENERATED FROM PYTHON SOURCE LINES 165-171

.. code-block:: default


    agg = tensor.sum()
    agg_item = agg.item()
    print(agg_item, type(agg_item))


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    12.0 <class 'float'>


.. GENERATED FROM PYTHON SOURCE LINES 172-175

**In-place operations**

Operations that store the result into the operand are called in-place. They
are denoted by a ``_`` suffix. For example, ``x.copy_(y)`` and ``x.t_()`` will
change ``x``.

.. GENERATED FROM PYTHON SOURCE LINES 175-180

.. code-block:: default


    print(f"{tensor} \n")
    tensor.add_(5)
    print(tensor)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    tensor([[1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.]])

    tensor([[6., 5., 6., 6.],
            [6., 5., 6., 6.],
            [6., 5., 6., 6.],
            [6., 5., 6., 6.]])


.. GENERATED FROM PYTHON SOURCE LINES 181-184

.. note::
     In-place operations save some memory, but can be problematic when computing
     derivatives because of an immediate loss of history. Hence, their use is
     discouraged.

.. GENERATED FROM PYTHON SOURCE LINES 188-190

--------------


.. GENERATED FROM PYTHON SOURCE LINES 193-199

.. _bridge-to-np-label:

Bridge with NumPy
~~~~~~~~~~~~~~~~~

Tensors on the CPU and NumPy arrays can share their underlying memory
locations, and changing one will change the other.

.. GENERATED FROM PYTHON SOURCE LINES 202-204

Tensor to NumPy array
^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 204-209

.. code-block:: default


    t = torch.ones(5)
    print(f"t: {t}")
    n = t.numpy()
    print(f"n: {n}")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    t: tensor([1., 1., 1., 1., 1.])
    n: [1. 1. 1. 1. 1.]


.. GENERATED FROM PYTHON SOURCE LINES 210-211

A change in the tensor is reflected in the NumPy array.

.. GENERATED FROM PYTHON SOURCE LINES 211-217

.. code-block:: default


    t.add_(1)
    print(f"t: {t}")
    print(f"n: {n}")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    t: tensor([2., 2., 2., 2., 2.])
    n: [2. 2. 2. 2. 2.]


.. GENERATED FROM PYTHON SOURCE LINES 218-220

NumPy array to Tensor
^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 220-223

.. code-block:: default


    n = np.ones(5)
    t = torch.from_numpy(n)


.. GENERATED FROM PYTHON SOURCE LINES 224-225

Changes in the NumPy array are reflected in the tensor.

.. GENERATED FROM PYTHON SOURCE LINES 225-228

.. code-block:: default


    np.add(n, 1, out=n)
    print(f"t: {t}")
    print(f"n: {n}")


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
    n: [2. 2. 2. 2. 2.]


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 0.023 seconds)


.. _sphx_glr_download_beginner_basics_tensorqs_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example


    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: tensorqs_tutorial.py `

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: tensorqs_tutorial.ipynb `


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_