
Dynamo / torch.compile

Torch-TensorRT provides a backend for the torch.compile API introduced in PyTorch 2.0. The following examples describe several ways you can leverage this backend to accelerate inference.
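As a minimal sketch of the pattern the examples below build on, you can pass `backend="torch_tensorrt"` to `torch.compile`; importing `torch_tensorrt` registers the backend, and compilation is deferred until the first call with real inputs. The specific model, shapes, and the `enabled_precisions` option shown here are illustrative assumptions; actual TensorRT compilation requires a CUDA GPU with TensorRT installed, so this sketch falls back to eager mode elsewhere.

```python
import torch

# A small illustrative model; any nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

x = torch.randn(8, 64)

if torch.cuda.is_available():
    import torch_tensorrt  # importing registers the "torch_tensorrt" backend

    model = model.cuda().half()
    x = x.cuda().half()
    # Compilation is lazy: the TensorRT engine is built on the first call.
    compiled = torch.compile(
        model,
        backend="torch_tensorrt",
        options={"enabled_precisions": {torch.half}},  # assumed option name
    )
else:
    compiled = model  # CPU-only fallback: run the model in eager mode

with torch.no_grad():
    out = compiled(x)
print(tuple(out.shape))
```

On a GPU machine, the first call triggers tracing and engine construction; subsequent calls with compatible input shapes reuse the cached engine.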

Torch Compile Stable Diffusion

Torch Export with Cudagraphs

Refitting Torch-TensorRT Programs with New Weights

Compiling a Transformer using torch.compile and TensorRT

Compiling GPT2 using Torch-TensorRT with the dynamo backend

Torch Compile Advanced Usage

Compiling Llama2 using Torch-TensorRT with the dynamo backend

Engine Caching (BERT)

Mutable Torch TensorRT Module

Compiling ResNet using the Torch-TensorRT torch.compile Backend

Deploy Quantized Models using Torch-TensorRT

Engine Caching

Using Custom Kernels within TensorRT Engines with Torch-TensorRT
