We are excited to announce the release of PyTorch® 2.4 (release note)! PyTorch 2.4 adds support for the latest version of Python (3.12) in torch.compile. AOTInductor freezing gives developers running AOTInductor more performance-based optimizations by allowing the serialization of MKLDNN weights. In addition, a new default TCPStore server backend built on libuv has been introduced, which should significantly reduce initialization times for users running large-scale jobs. Finally, a new Python Custom Operator API makes it easier than before to integrate custom kernels into PyTorch, especially for torch.compile.
This release is composed of 3661 commits and 475 contributors since PyTorch 2.3. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.4. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.
| Beta | Prototype | Performance Improvements |
| --- | --- | --- |
| Python 3.12 support for torch.compile | FSDP2: DTensor-based per-parameter-sharding FSDP | torch.compile optimizations for AWS Graviton (aarch64-linux) processors |
| AOTInductor Freezing for CPU | torch.distributed.pipelining, simplified pipeline parallelism | BF16 symbolic shape optimization in TorchInductor |
| New Higher-level Python Custom Operator API | Intel GPU is available through source build | Performance optimizations for GenAI projects utilizing CPU devices |
| Switching TCPStore’s default server backend to libuv | | |
*To see a full list of public feature submissions click here.
Beta Features
[Beta] Python 3.12 support for torch.compile
torch.compile() previously supported only Python 3.8-3.11. Users can now optimize models with torch.compile() on Python 3.12.
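As a quick sketch, compiling a small function works the same way on 3.12 as on earlier versions:

import sys
import torch

print(sys.version)  # e.g. 3.12.x; torch.compile now accepts Python 3.8 through 3.12

@torch.compile
def gelu_scale(x):
    # simple pointwise graph captured by Dynamo and lowered by Inductor
    return torch.nn.functional.gelu(x) * 2.0

print(gelu_scale(torch.randn(8, 8)))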
[Beta] AOTInductor Freezing for CPU
This feature enables users to turn on the freezing flag when using AOTInductor on CPU. With it, AOTInductor can cover the same set of operator scenarios and reach performance on par with the Inductor CPP backend. Before this support, models containing MKLDNN operators (used for computation-intensive operators such as Convolution, Linear, and ConvTranspose) would fail to run with freezing enabled, since AOTInductor could not serialize MKLDNN weights, which have an opaque format.
The workflow is the same as described in the AOTInductor tutorial; in addition, users can now set the freezing flag to get better performance:
export TORCHINDUCTOR_FREEZING=1
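As a rough sketch of that workflow (assuming the torch._export.aot_compile entry point used in the 2.4-era AOTInductor tutorial; the toy Net module is purely illustrative), the flag can also be set from Python before compiling the model into a shared library:

import os
os.environ["TORCHINDUCTOR_FREEZING"] = "1"  # same effect as the export above; set before compiling

import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Linear is one of the MKLDNN-backed ops whose weights freezing serializes
        self.linear = torch.nn.Linear(64, 64)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = Net().eval()
example_inputs = (torch.randn(8, 64),)

with torch.no_grad():
    # Compiles the model ahead of time into a shared library; see the tutorial
    # for how to load and run the resulting artifact.
    so_path = torch._export.aot_compile(model, example_inputs)
print(so_path)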
[Beta] New Higher-level Python Custom Operator API
We’ve added a new higher-level Python Custom Operator API that makes it easier than before to extend PyTorch with custom operators that behave like PyTorch’s built-in operators. Operators registered using the new high-level torch.library APIs are guaranteed to be compatible with torch.compile and other PyTorch subsystems. Authoring a custom operator in Python using the previous low-level torch.library APIs required a deep understanding of PyTorch internals and came with many footguns.
Please see the tutorial for more information.
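As a brief illustration (closely following the shape of the Python custom operators tutorial; mylib::numpy_sin is just an example name), a NumPy-backed operator can be registered and then used under torch.compile:

import numpy as np
import torch

# Define a custom op backed by NumPy; it behaves like a built-in PyTorch operator.
@torch.library.custom_op("mylib::numpy_sin", mutates_args=())
def numpy_sin(x: torch.Tensor) -> torch.Tensor:
    x_np = x.cpu().numpy()
    return torch.from_numpy(np.sin(x_np)).to(x.device)

# A "fake" (meta) implementation so torch.compile can reason about shapes and
# dtypes without running the NumPy code.
@numpy_sin.register_fake
def _(x):
    return torch.empty_like(x)

@torch.compile(fullgraph=True)
def f(x):
    return numpy_sin(x)

print(f(torch.randn(4)))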
[Beta] Switching TCPStore’s default server backend to libuv
We introduced a new default server backend for TCPStore built with libuv, which should bring significantly lower initialization times and better scalability. Users running large-scale jobs should especially benefit from much shorter startup times.
For more information on the motivation and fallback instructions, please refer to this tutorial.
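For illustration, a minimal server-side TCPStore rendezvous looks the same as before; the commented-out use_libuv flag (alongside the USE_LIBUV environment variable covered in the tutorial) is our assumption for how to opt back into the legacy backend:

from datetime import timedelta
import torch.distributed as dist

# Server (is_master=True) side of a 2-process rendezvous; libuv is now the default backend.
store = dist.TCPStore(
    "127.0.0.1",
    29500,
    world_size=2,
    is_master=True,
    timeout=timedelta(seconds=30),
    wait_for_workers=False,  # don't block here waiting for the other rank to connect
    # use_libuv=False,       # assumed opt-out if the legacy backend is needed; see the tutorial
)
store.set("status", "ready")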
Prototype Features
[PROTOTYPE] FSDP2: DTensor-based per-parameter-sharding FSDP
FSDP2 is a new fully sharded data parallelism implementation that uses dim-0 per-parameter sharding to resolve fundamental composability challenges with FSDP1’s flat-parameter sharding.
For more information regarding the motivation and design of FSDP2, please refer to the RFC on GitHub.
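A minimal sketch of the new API, assuming the prototype import path torch.distributed._composable.fsdp and a hypothetical MyTransformer module (the signature may change while FSDP2 is in prototype):

import torch.distributed as dist
from torch.distributed._composable.fsdp import fully_shard

dist.init_process_group("nccl")
model = MyTransformer()       # hypothetical module exposing a .layers ModuleList
for block in model.layers:
    fully_shard(block)        # shard each block's parameters along dim-0 as DTensors
fully_shard(model)            # shard the remaining (root) parameters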
[PROTOTYPE] torch.distributed.pipelining, simplified pipeline parallelism
Pipeline Parallelism is one of the primitive parallelism techniques for deep learning. It allows the execution of a model to be partitioned such that multiple micro-batches can execute different parts of the model code concurrently.
torch.distributed.pipelining provides a toolkit that allows easy implementation of pipeline parallelism on general models, while also offering composability with other common PyTorch distributed features like DDP, FSDP, or tensor parallelism.
For more information on this, please refer to our documentation and tutorial.
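As a very rough sketch (this is a prototype API: keyword names such as mb_args and the chosen split point are our assumptions here; please check the documentation and tutorial for the authoritative signatures), a model can be split into stages around a chosen submodule:

import torch
from torch.distributed.pipelining import pipeline, SplitPoint, ScheduleGPipe

mod = MyTransformer()    # hypothetical model exposing mod.layers
x = torch.randn(4, 512)  # one example micro-batch

# Split the model into two stages, cutting right before layers.4.
pipe = pipeline(module=mod, mb_args=(x,), split_spec={"layers.4": SplitPoint.BEGINNING})
print(pipe)

# Each rank then wraps its stage in a schedule (e.g. ScheduleGPipe) and calls
# schedule.step(...) once per iteration; see the tutorial for the runtime setup.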
[PROTOTYPE] Intel GPU is available through source build
Intel GPU support in PyTorch on Linux systems offers fundamental functionality on the Intel® Data Center GPU Max Series: eager mode and torch.compile.
For eager mode, commonly used ATen operators are implemented using the SYCL programming language, and the most performance-critical graphs and operators are highly optimized using the oneAPI Deep Neural Network Library (oneDNN). For torch.compile mode, the Intel GPU backend is integrated into Inductor on top of Triton.
For more information on the Intel GPU source build, please refer to our blog post and documentation.
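As an illustrative sketch, assuming a source build that exposes the "xpu" device as described in the documentation:

import torch

device = torch.device("xpu")
model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU()).to(device)
x = torch.randn(32, 128, device=device)

y_eager = model(x)               # eager mode: ATen ops implemented with SYCL / oneDNN
compiled = torch.compile(model)  # torch.compile: Intel GPU backend in Inductor on top of Triton
y_compiled = compiled(x)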
Performance Improvements
torch.compile optimizations for AWS Graviton (aarch64-linux) processors
AWS optimized the PyTorch torch.compile feature for AWS Graviton3 processors. This optimization results in up to 2x better performance for Hugging Face model inference (based on geomean of performance improvement for 33 models) and up to 1.35x better performance for TorchBench model inference (geomean of performance improvement for 45 models) compared to the default eager mode inference across several natural language processing (NLP), computer vision (CV), and recommendation models on AWS Graviton3-based Amazon EC2 instances.
For more information regarding specific technical details please refer to the blog post.
BF16 symbolic shape optimization in TorchInductor
PyTorch users can now experience improved quality and performance gains with the beta BF16 symbolic shape support. While static shape may afford additional optimization opportunities compared to symbolic shape, it is insufficient for scenarios such as inference services with varying batch size and sequence length, or detection models with data-dependent output shape.
Verification using TorchBench, Hugging Face, and TIMM models shows a pass rate and speedup comparable to the BF16 static shape scenario. Combining the benefits of symbolic shape with the BF16 AMX hardware acceleration provided by Intel CPUs, plus the general Inductor CPU backend optimizations in PyTorch 2.4 that apply to both static and symbolic shape, BF16 symbolic shape performance has improved significantly compared to PyTorch 2.3.
The API to use this feature:
model = ...
model.eval()
with torch.autocast(device_type="cpu", dtype=torch.bfloat16), torch.no_grad():
    compiled_model = torch.compile(model, dynamic=True)
Performance optimizations for GenAI projects utilizing CPU devices
This highlights the enhanced performance of PyTorch on CPU, as demonstrated through the optimizations made for the “Segment Anything Fast” and “Diffusion Fast” projects. Previously these projects supported only CUDA devices, so we have incorporated CPU support, enabling users to leverage the increased power of CPU for running the projects’ experiments. Meanwhile, we have also employed a block-wise attention mask for SDPA, which can significantly reduce peak memory usage and improve performance, and we have optimized a series of layout propagation rules in Inductor CPU to further improve performance.
To facilitate this, we have updated the README file. The API to use this feature is given below; simply provide --device cpu on the command line:
- For Segment Anything Fast:

export SEGMENT_ANYTHING_FAST_USE_FLASH_4=0
python run_experiments.py 16 vit_b <pytorch_github> <segment-anything_github> <path_to_experiments_data> --run-experiments --num-workers 32 --device cpu

- For Diffusion Fast:

python run_benchmark.py --compile_unet --compile_vae --enable_fused_projections --device=cpu
Users can follow the guidelines to run the experiments and observe the performance improvements firsthand, as well as explore the performance improvement trends across FP32 and BF16 data types.
Additionally, users can achieve good performance by combining torch.compile and SDPA. By observing the performance trends across these different factors, users can gain a deeper understanding of how the various optimizations enhance PyTorch’s performance on CPU.