The AI landscape is quickly evolving, with AI models being deployed beyond the server to edge devices such as mobile phones, wearables, AR/VR/MR headsets, and embedded devices. PyTorch Edge extends PyTorch's research-to-production stack to these edge devices and paves the way for building innovative, privacy-aware experiences with superior productivity, portability, and performance, optimized for these diverse hardware platforms.
PyTorch on Edge - From PyTorch Mobile to ExecuTorch
In 2019, we announced PyTorch Mobile, powered by TorchScript, to address the ever-growing need for edge devices to execute AI models. To advance our PyTorch Edge offerings even further, we developed ExecuTorch. ExecuTorch facilitates PyTorch inference on edge devices while supporting portability across hardware platforms with lower runtime and framework tax. ExecuTorch was developed collaboratively by industry leaders including Meta, Arm, Apple, and Qualcomm.
PyTorch Mobile allowed users to stay in the PyTorch ecosystem from training through model deployment. However, the lack of consistent PyTorch semantics between server and on-device execution, together with the reliance on TorchScript, inhibited the developer experience and slowed the path from research to production. PyTorch Mobile also didn’t provide well-defined entry points for third-party integrations and optimizations, which we’ve addressed with ExecuTorch.
We’ve renewed our commitment to on-device AI with ExecuTorch. It extends our ecosystem in a way that is much more “in the spirit of PyTorch,” with productivity, hackability, and extensibility as critical components. We look forward to supporting edge and embedded applications with low latency, strong privacy, and continued innovation on the edge.