Introduction to ONNX¶
Author: Thiago Crepaldi
Open Neural Network eXchange (ONNX) is an open standard
format for representing machine learning models. The
torch.onnx module provides APIs to
capture the computation graph from a native PyTorch
torch.nn.Module model and convert
it into an ONNX graph.
Currently, there are two flavors of ONNX exporter API, but this
tutorial will focus on the newer, TorchDynamo-based exporter.
The TorchDynamo engine is leveraged to hook into Python’s frame evaluation API and dynamically rewrite Python bytecode into an FX graph. The resulting FX graph is polished before it is finally translated into an ONNX graph.
The main advantage of this approach is that the FX graph is captured using bytecode analysis, which preserves the dynamic nature of the model, instead of using traditional static tracing techniques.
PyTorch 2.1.0 or newer is required.
The ONNX exporter depends on extra Python packages:
- the ONNX standard library
- the ONNX Script library
They can be installed through pip:
pip install --upgrade onnx onnxscript
To validate the installation, run the following commands:
import torch
print(torch.__version__)

import onnxscript
print(onnxscript.__version__)

from onnxscript import opset18  # opset 18 is the latest (and only) supported version for now

import onnxruntime
print(onnxruntime.__version__)
Each import must succeed without any errors and the library versions must be printed out.
The list below refers to tutorials that range from basic examples to advanced scenarios, not necessarily in the order they are listed. Feel free to jump directly to specific topics of your interest, or sit tight and have fun going through all of them to learn all there is about the ONNX exporter.