:orphan:

.. _gallery:

Examples and tutorials
======================


.. raw:: html

    <div class="sphx-glr-thumbnails">

.. thumbnail-parent-div-open

.. thumbnail-parent-div-close

.. raw:: html

    </div>

Transforms
----------


.. raw:: html

    <div class="sphx-glr-thumbnails">

.. thumbnail-parent-div-open

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This example illustrates all of what you need to know to get started with the new torchvision.transforms.v2 API. We'll cover simple tasks like image classification, and more advanced ones like object detection / segmentation.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_transforms_getting_started_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Getting started with transforms v2</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This example illustrates some of the various transforms available in the torchvision.transforms.v2 module <transforms>.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_transforms_illustrations_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_transforms_illustrations.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Illustration of transforms</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="Object detection and segmentation tasks are natively supported: torchvision.transforms.v2 enables jointly transforming images, videos, bounding boxes, and masks.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_transforms_e2e_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Transforms v2: End-to-end object detection/segmentation example</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="CutMix and MixUp are popular augmentation strategies that can improve classification accuracy.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_cutmix_mixup_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_cutmix_mixup.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">How to use CutMix and MixUp</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This guide explains how to write transforms that are compatible with the torchvision transforms V2 API.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_custom_transforms_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_custom_transforms.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">How to write your own v2 transforms</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="TVTensors are Tensor subclasses introduced together with torchvision.transforms.v2. This example showcases what these TVTensors are and how they behave.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_tv_tensors_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">TVTensors FAQ</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This guide is intended for advanced users and downstream library maintainers. We explain how to write your own TVTensor class, and how to make it compatible with the built-in Torchvision v2 transforms. Before continuing, make sure you have read sphx_glr_auto_examples_transforms_plot_tv_tensors.py.">

.. only:: html

  .. image:: /auto_examples/transforms/images/thumb/sphx_glr_plot_custom_tv_tensors_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_transforms_plot_custom_tv_tensors.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">How to write your own TVTensor class</div>
    </div>

.. thumbnail-parent-div-close

.. raw:: html

    </div>
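As a quick taste of what the examples above cover, here is a minimal sketch (not part of the generated gallery; the image size, box coordinates, and transform choices are arbitrary placeholders) of jointly transforming an image and its bounding boxes with ``torchvision.transforms.v2``:

.. code-block:: python

    import torch
    from torchvision import tv_tensors
    from torchvision.transforms import v2

    # Wrap the inputs as TVTensors so the v2 transforms know how to handle each one.
    img = tv_tensors.Image(torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8))
    boxes = tv_tensors.BoundingBoxes(
        [[10, 10, 100, 100]], format="XYXY", canvas_size=(256, 256)
    )

    transforms = v2.Compose([
        v2.RandomResizedCrop(size=(224, 224), antialias=True),
        v2.RandomHorizontalFlip(p=0.5),
        v2.ToDtype(torch.float32, scale=True),
    ])

    # The same random parameters are applied to the image and to the boxes.
    out_img, out_boxes = transforms(img, boxes)

See :ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` for the full walkthrough.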
Others
------


.. raw:: html

    <div class="sphx-glr-thumbnails">

.. thumbnail-parent-div-open

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="Optical flow is the task of predicting movement between two images, usually two consecutive frames of a video. Optical flow models take two images as input, and predict a flow: the flow indicates the displacement of every single pixel in the first image, and maps it to its corresponding pixel in the second image. Flows are (2, H, W)-dimensional tensors, where the first axis corresponds to the predicted horizontal and vertical displacements.">

.. only:: html

  .. image:: /auto_examples/others/images/thumb/sphx_glr_plot_optical_flow_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_others_plot_optical_flow.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Optical Flow: Predicting movement with the RAFT model</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="The following example illustrates the operations available in the ops module for repurposing segmentation masks into object localization annotations for different tasks (e.g. transforming masks used by instance and panoptic segmentation methods into bounding boxes used by object detection methods).">

.. only:: html

  .. image:: /auto_examples/others/images/thumb/sphx_glr_plot_repurposing_annotations_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_others_plot_repurposing_annotations.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Repurposing masks into bounding boxes</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This example illustrates torchscript support of the torchvision transforms on Tensor images.">

.. only:: html

  .. image:: /auto_examples/others/images/thumb/sphx_glr_plot_scripted_tensor_transforms_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_others_plot_scripted_tensor_transforms.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Torchscript support</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This example illustrates some of the APIs that torchvision offers for videos, together with examples of how to build datasets and more.">

.. only:: html

  .. image:: /auto_examples/others/images/thumb/sphx_glr_plot_video_api_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_others_plot_video_api.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Video API</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer" tooltip="This example illustrates some of the utilities that torchvision offers for visualizing images, bounding boxes, segmentation masks and keypoints.">

.. only:: html

  .. image:: /auto_examples/others/images/thumb/sphx_glr_plot_visualization_utils_thumb.png
    :alt:

  :ref:`sphx_glr_auto_examples_others_plot_visualization_utils.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Visualization utilities</div>
    </div>

.. thumbnail-parent-div-close

.. raw:: html

    </div>
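For a sense of the mask repurposing mentioned above, here is a minimal sketch (again not part of the generated gallery; the mask contents are made up) using ``torchvision.ops.masks_to_boxes`` to convert binary instance masks into bounding boxes:

.. code-block:: python

    import torch
    from torchvision.ops import masks_to_boxes

    # Two dummy binary instance masks of shape (N, H, W).
    masks = torch.zeros((2, 32, 32), dtype=torch.bool)
    masks[0, 4:10, 6:20] = True
    masks[1, 15:30, 2:12] = True

    # Each mask becomes its tightest enclosing box, returned as an
    # (N, 4) tensor in (x1, y1, x2, y2) format.
    boxes = masks_to_boxes(masks)

See :ref:`sphx_glr_auto_examples_others_plot_repurposing_annotations.py` for the complete example.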
.. toctree::
   :hidden:
   :includehidden:

   /auto_examples/transforms/index.rst
   /auto_examples/others/index.rst


.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-gallery

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download all examples in Python source code: auto_examples_python.zip </auto_examples/auto_examples_python.zip>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip </auto_examples/auto_examples_jupyter.zip>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_