.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "beginner/former_torchies/tensor_tutorial_old.py"

.. _sphx_glr_beginner_former_torchies_tensor_tutorial_old.py:

Tensors
=======

Tensors behave almost exactly the same way in PyTorch as they do in Torch.

Create a tensor of size (5 x 7) with uninitialized memory:

.. code-block:: default

    import torch
    a = torch.empty(5, 7, dtype=torch.float)

Initialize a double tensor with values drawn from a normal distribution
with mean=0 and variance=1:

.. code-block:: default

    a = torch.randn(5, 7, dtype=torch.double)
    print(a)
    print(a.size())

.. note::
    ``torch.Size`` is in fact a tuple, so it supports the same operations
    as a tuple.

In-place / Out-of-place
-----------------------

The first difference is that ALL operations that modify a tensor in place
have an ``_`` postfix. For example, ``add`` is the out-of-place version,
and ``add_`` is the in-place version.

.. code-block:: default

    a.fill_(3.5)
    # a has now been filled with the value 3.5

    b = a.add(4.0)
    # a is still filled with 3.5
    # new tensor b is returned with values 3.5 + 4.0 = 7.5

    print(a, b)

Some operations like ``narrow`` do not have an in-place version, and
hence ``.narrow_`` does not exist. Similarly, some operations like
``fill_`` do not have an out-of-place version, so ``.fill`` does not
exist.

Zero Indexing
-------------

Another difference is that Tensors are zero-indexed. (In Lua, tensors are
one-indexed.)

.. code-block:: default

    b = a[0, 3]  # select 1st row, 4th column from a

Tensors can also be indexed with Python's slicing notation:

.. code-block:: default

    b = a[:, 3:5]  # selects all rows, 4th and 5th columns from a

No camel casing
---------------

The next small difference is that function names are no longer camelCase.
For example, ``indexAdd`` is now called ``index_add_``.

.. code-block:: default

    x = torch.ones(5, 5)
    print(x)

.. code-block:: default

    z = torch.empty(5, 2)
    z[:, 0] = 10
    z[:, 1] = 100
    print(z)

.. code-block:: default

    # adds z's columns to columns 4 and 0 of x, in place:
    # x[:, 4] += z[:, 0] and x[:, 0] += z[:, 1]
    x.index_add_(1, torch.tensor([4, 0], dtype=torch.long), z)
    print(x)

Numpy Bridge
------------

Converting a torch Tensor to a numpy array and vice versa is a breeze.
The torch Tensor and numpy array will share their underlying memory
locations, and changing one will change the other.

Converting torch Tensor to numpy Array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: default

    a = torch.ones(5)
    print(a)

.. code-block:: default

    b = a.numpy()
    print(b)

.. code-block:: default

    a.add_(1)
    print(a)
    print(b)    # see how the numpy array changed in value
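Note that the sharing only propagates through in-place operations. As a
minimal sketch (not part of the original tutorial, with illustrative
variable names), the contrast with the in-place / out-of-place
distinction from above: an out-of-place operation rebinds the Python name
to a brand-new tensor, so the numpy array keeps pointing at the old
storage.

.. code-block:: default

    import torch

    a = torch.ones(5)
    b = a.numpy()    # b shares a's underlying storage

    a.add_(1)        # in-place: the shared storage is modified
    print(b)         # [2. 2. 2. 2. 2.]

    a = a + 1        # out-of-place: a now names a NEW tensor
    print(b)         # still [2. 2. 2. 2. 2.]; b tracks the old storage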
Converting numpy Array to torch Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: default

    import numpy as np
    a = np.ones(5)
    b = torch.from_numpy(a)
    np.add(a, 1, out=a)
    print(a)
    print(b)    # see how changing the np array changed the torch Tensor automatically

All the Tensors on the CPU except a CharTensor support converting to
NumPy and back.

CUDA Tensors
------------

CUDA Tensors are nice and easy in PyTorch, and transferring a tensor
between the CPU and the GPU retains its underlying type.

.. code-block:: default

    # let us run this cell only if CUDA is available
    if torch.cuda.is_available():

        # creates a LongTensor and transfers it
        # to GPU as torch.cuda.LongTensor
        a = torch.full((10,), 3, dtype=torch.long, device=torch.device("cuda"))
        print(a.type())    # torch.cuda.LongTensor

        # transfers it back to CPU, where it is
        # a torch.LongTensor again
        b = a.to(torch.device("cpu"))
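As a closing sketch (not part of the original tutorial): current PyTorch
code usually writes transfers in a device-agnostic style, picking a
``torch.device`` once and using ``Tensor.to``, which degrades gracefully
when no GPU is present. The variable names below are illustrative only.

.. code-block:: default

    import torch

    # pick a device once, then write code that runs on either
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    a = torch.full((10,), 3, dtype=torch.long)
    a = a.to(device)          # no-op when device is "cpu"
    b = (a * 2).to("cpu")     # move the result back explicitly
    print(b.type())           # torch.LongTensor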