# functorch.compile (experimental)

AOT Autograd is an experimental feature that allows ahead-of-time capture of forward and backward graphs, and offers easy integration with compilers. This creates an easy-to-hack, Python-based development environment for speeding up the training of PyTorch models. AOT Autograd currently lives in the `functorch.compile` namespace.

Warning

AOT Autograd is experimental and the APIs are likely to change. We are looking for feedback. If you are interested in using AOT Autograd and need help or have suggestions, please feel free to open an issue. We will be happy to help.

## Compilation APIs (experimental)

- `aot_function`: Traces the forward and backward graph of `fn` using the torch dispatch mechanism, then compiles the generated forward and backward graphs through `fw_compiler` and `bw_compiler`.
- `aot_module`: Traces the forward and backward graph of `mod` using the torch dispatch tracing mechanism.
- `memory_efficient_fusion`: Wrapper over `aot_function()` and `aot_module()` that performs memory-efficient fusion.
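As a minimal sketch of how these pieces fit together (the printing compiler and the toy function below are illustrative, not part of the library), `aot_function` hands each captured FX graph to the supplied compiler, which must return a callable:

```python
import torch
from functorch.compile import aot_function

# Illustrative "compiler": AOT Autograd passes it an FX GraphModule plus
# example inputs and expects a callable back. Returning the module
# unchanged makes this a no-op pass that only prints the captured graph.
def print_compiler(fx_module, example_inputs):
    print(fx_module.code)
    return fx_module

def fn(x, y):
    return torch.sin(x) + torch.cos(y)

# The forward graph is compiled on the first call; the backward graph
# is compiled the first time .backward() runs.
compiled_fn = aot_function(fn, fw_compiler=print_compiler, bw_compiler=print_compiler)

x = torch.randn(4, requires_grad=True)
y = torch.randn(4, requires_grad=True)
out = compiled_fn(x, y)
out.sum().backward()
```

Because the compilers here are identity passes, `compiled_fn` computes the same values and gradients as the original `fn`.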

## Partitioners (experimental)

- `default_partition`: Partitions the joint module in a manner that closely resembles the behavior of the original `.forward()` and `.backward()` of the callable, i.e., the resulting forward graph contains exactly those operators that are executed in the original `.forward()` callable passed to `aot_function()`.
- `min_cut_rematerialization_partition`: Partitions the joint graph such that the backward graph recomputes (rematerializes) parts of the forward, saving activation memory at the cost of extra compute.
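A partitioner is selected through the `partition_fn` argument of `aot_function()`; the node-counting compiler below is purely illustrative. This sketch traces a small function and splits its joint graph with the min-cut partitioner:

```python
import torch
from functorch.compile import aot_function, min_cut_rematerialization_partition

# Illustrative compiler that reports graph sizes and runs the graph as-is.
def report(name):
    def compiler(fx_module, example_inputs):
        print(f"{name} graph has {len(list(fx_module.graph.nodes))} nodes")
        return fx_module
    return compiler

def fn(x):
    return torch.cos(torch.sin(x))

# partition_fn controls how the traced joint forward-backward graph is
# split into the separate forward and backward graphs.
compiled_fn = aot_function(
    fn,
    fw_compiler=report("forward"),
    bw_compiler=report("backward"),
    partition_fn=min_cut_rematerialization_partition,
)

x = torch.randn(8, requires_grad=True)
compiled_fn(x).sum().backward()
```

Regardless of where the cut is placed, the partitioned graphs must produce the same gradients as eager mode.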

## Compilers (experimental)

- `nop`: Returns the FX graph module `fx_g` as is; useful for inspecting captured graphs or measuring pure capture overhead.
- `ts_compile`: Compiles `fx_g` with the TorchScript compiler.
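The two built-in compilers can be mixed and matched per graph; as a sketch (the toy function is illustrative), this compiles only the forward graph with TorchScript and leaves the backward graph uncompiled:

```python
import torch
from functorch.compile import aot_function, nop, ts_compile

def fn(x):
    return torch.relu(x * 2.0)

# TorchScript-compile the forward graph; nop leaves the backward graph
# as a plain FX GraphModule executed eagerly.
compiled_fn = aot_function(fn, fw_compiler=ts_compile, bw_compiler=nop)

x = torch.randn(4, requires_grad=True)
out = compiled_fn(x)
out.sum().backward()
```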