PyTorch documentation
PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
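For orientation, here is a minimal sketch of the core tensor API that the sections below document; the `"cuda"` device string assumes a CUDA-capable GPU is present:

```python
import torch

# Create a random tensor and run a basic operation on the CPU.
x = torch.randn(3, 3)
y = x @ x.T  # matrix multiplication

# The same code runs on a GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)
y = (x @ x.T).to("cpu")
```

Autograd, the `torch.nn` modules, and the distributed packages listed below all build on this tensor type.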
Features described in this documentation are classified by release status:
Stable: These features will be maintained long-term. There should generally be no major performance limitations or gaps in documentation, and we expect to maintain backwards compatibility (although breaking changes can happen, with notice given one release ahead of time).
Beta: These features are tagged as Beta because the API may change based on user feedback, because the performance needs to improve, or because coverage across operators is not yet complete. For Beta features, we are committing to seeing the feature through to the Stable classification. We are not, however, committing to backwards compatibility.
Prototype: These features are typically not available as part of binary distributions like PyPI or Conda, except sometimes behind run-time flags, and are at an early stage for feedback and testing.
- Automatic Mixed Precision examples
- Autograd mechanics
- Broadcasting semantics
- CPU threading and TorchScript inference
- CUDA semantics
- PyTorch Custom Operators Landing Page
- Distributed Data Parallel
- Extending PyTorch
- Extending torch.func with autograd.Function
- Frequently Asked Questions
- FSDP Notes
- Getting Started on Intel GPU
- Gradcheck mechanics
- HIP (ROCm) semantics
- Features for large-scale deployments
- Modules
- MPS backend
- Multiprocessing best practices
- Numerical accuracy
- Reproducibility
- Serialization semantics
- Windows FAQ
- torch
- torch.nn
- torch.nn.functional
- torch.Tensor
- Tensor Attributes
- Tensor Views
- torch.amp
- torch.autograd
- torch.library
- torch.accelerator
- torch.cpu
- torch.cuda
  - Understanding CUDA Memory Usage
  - Generating a Snapshot
  - Using the visualizer
  - Snapshot API Reference
- torch.mps
- torch.xpu
- torch.mtia
- torch.mtia.memory
- Meta device
- torch.backends
- torch.export
- torch.distributed
- torch.distributed.tensor
- torch.distributed.algorithms.join
- torch.distributed.elastic
- torch.distributed.fsdp
- torch.distributed.fsdp.fully_shard
- torch.distributed.tensor.parallel
- torch.distributed.optim
- torch.distributed.pipelining
- torch.distributed.checkpoint
- torch.distributions
- torch.compiler
- torch.fft
- torch.func
- torch.futures
- torch.fx
- torch.fx.experimental
- torch.hub
- torch.jit
- torch.linalg
- torch.monitor
- torch.signal
- torch.special
- torch.overrides
- torch.package
- torch.profiler
- torch.nn.init
- torch.nn.attention
- torch.onnx
- torch.optim
- Complex Numbers
- DDP Communication Hooks
- Quantization
- Distributed RPC Framework
- torch.random
- torch.masked
- torch.nested
- torch.Size
- torch.sparse
- torch.Storage
- torch.testing
- torch.utils
- torch.utils.benchmark
- torch.utils.bottleneck
- torch.utils.checkpoint
- torch.utils.cpp_extension
- torch.utils.data
- torch.utils.deterministic
- torch.utils.jit
- torch.utils.dlpack
- torch.utils.mobile_optimizer
- torch.utils.model_zoo
- torch.utils.tensorboard
- torch.utils.module_tracker
- Type Info
- Named Tensors
- Named Tensors operator coverage
- torch.__config__
- torch.__future__
- torch._logging
- Torch Environment Variables