Open Neural Network eXchange (ONNX) is an open standard
format for representing machine learning models. The
torch.onnx module captures the computation graph from a
torch.nn.Module model and converts it into an ONNX graph.
There are two flavors of ONNX exporter API, described below.
TorchDynamo-based ONNX Exporter
The TorchDynamo-based ONNX exporter is the newest (and currently Beta) exporter, available for PyTorch 2.0 and newer.
The TorchDynamo engine is leveraged to hook into Python’s frame evaluation API and dynamically rewrite its bytecode into an FX graph. The resulting FX graph is then polished before it is finally translated into an ONNX graph.
The main advantage of this approach is that the FX graph is captured using bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.
TorchScript-based ONNX Exporter
The TorchScript-based ONNX exporter has been available since PyTorch 1.2.0. It relies on static tracing: the model is run once with example inputs, and only the operations executed on that run are recorded. As a consequence, the resulting graph has a few limitations:
It does not record any control flow, such as if-statements or loops;
It does not handle nuances between training and eval mode;
It does not truly handle dynamic inputs.
To work around these static tracing limitations, the exporter also supports TorchScript scripting (via
torch.jit.script()), which adds support for data-dependent control flow, for example. However, TorchScript
itself is a subset of the Python language, so not all Python features are supported, such as in-place operations.