`torch.compiler` is a namespace through which some of the internal compiler
methods are surfaced for user consumption. The main function and the feature in
this namespace is `torch.compile`.

`torch.compile` is a PyTorch function introduced in PyTorch 2.x that aims to
solve the problem of accurate graph capturing in PyTorch and ultimately enable
software engineers to run their PyTorch programs faster. `torch.compile` is
written in Python and it marks the transition of PyTorch from C++ to Python.
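In its simplest form, compiling a function or module takes a single call. A minimal sketch (the function and model below are illustrative):

```python
import torch

# Any Python callable that runs PyTorch operations can be compiled.
def fn(x, y):
    return torch.sin(x) + torch.cos(y)

compiled_fn = torch.compile(fn)
out = compiled_fn(torch.randn(8), torch.randn(8))  # first call triggers capture and compilation

# nn.Module instances work the same way.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
compiled_model = torch.compile(model)
```

Compilation happens lazily on the first call, so the first invocation is slower than subsequent ones.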
`torch.compile` leverages the following underlying technologies:

- **TorchDynamo** (`torch._dynamo`) is an internal API that uses a CPython feature called the Frame Evaluation API to safely capture PyTorch graphs. Methods that are available externally for PyTorch users are surfaced through the `torch.compiler` namespace.
- **TorchInductor** is the default `torch.compile` deep learning compiler that generates fast code for multiple accelerators and backends. You need to use a backend compiler to make speedups through `torch.compile` possible. For NVIDIA and AMD GPUs, it leverages OpenAI Triton as the key building block.
- **AOT Autograd** captures not only the user-level code, but also backpropagation, which results in capturing the backwards pass "ahead-of-time". This enables acceleration of both the forwards and backwards pass using TorchInductor (see the sketch below).
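To make the AOT Autograd point concrete, the following sketch compiles a model and runs one training step; the backward pass goes through the compiled artifact without any extra API calls (the model, loss, and optimizer here are illustrative):

```python
import torch

model = torch.compile(torch.nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, target = torch.randn(64, 32), torch.randn(64, 2)

loss = torch.nn.functional.mse_loss(model(x), target)  # compiled forward pass
loss.backward()  # backward pass was captured ahead-of-time by AOT Autograd
opt.step()
```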
In some cases, the terms `torch.compile`, TorchDynamo, and `torch.compiler`
might be used interchangeably in this documentation.
As mentioned above, to run your workflows faster,
TorchDynamo requires a backend that converts the captured graphs into fast
machine code. Different backends can result in different optimization gains.
The default backend is called TorchInductor, also known as *inductor*.
TorchDynamo also supports a list of backends developed by our partners, each
with its own optional dependencies; they can be seen by running
`torch.compiler.list_backends()`.
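For example, to check which backends are available in the current environment:

```python
import torch

# Names returned here are valid values for torch.compile(..., backend=...).
# The exact list depends on which optional dependencies are installed.
print(torch.compiler.list_backends())
```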
Some of the most commonly used backends include:
**Training & inference backends**

| Backend | Description |
|---|---|
| `torch.compile(m, backend="inductor")` | Uses the TorchInductor backend. |
| `torch.compile(m, backend="cudagraphs")` | CUDA graphs with AOT Autograd. |
| `torch.compile(m, backend="ipex")` | Uses IPEX on CPU. |
| `torch.compile(m, backend="onnxrt")` | Uses ONNX Runtime for training on CPU/GPU. |

**Inference-only backends**

| Backend | Description |
|---|---|
| `torch.compile(m, backend="tensorrt")` | Uses Torch-TensorRT for inference optimizations. Requires `import torch_tensorrt` in the calling script to register the backend. |
| `torch.compile(m, backend="ipex")` | Uses IPEX for inference on CPU. |
| `torch.compile(m, backend="tvm")` | Uses Apache TVM for inference optimizations. |
| `torch.compile(m, backend="openvino")` | Uses OpenVINO for inference optimizations. |
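Switching backends is just a matter of passing the corresponding `backend` string to `torch.compile`. A minimal inference sketch, assuming the optional OpenVINO dependency is installed:

```python
import torch

model = torch.nn.Linear(16, 4).eval()
# "openvino" is only registered when the OpenVINO integration is installed.
compiled = torch.compile(model, backend="openvino")

with torch.no_grad():
    out = compiled(torch.randn(8, 16))
```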
Read more:

- Getting Started
- torch.compiler API reference
- TorchDynamo APIs for fine-grained tracing
- AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models
- TorchInductor GPU Profiling
- Profiling to understand torch.compile performance
- Frequently Asked Questions
- PyTorch 2.0 Troubleshooting
- PyTorch 2.0 Performance Dashboard