The PyTorch Ecosystem Working Group is happy to welcome several new projects to the PyTorch Ecosystem Landscape including PhysicsNeMo, Unsloth, ONNX, and KTransformers. The PyTorch Ecosystem Landscape is a map of the innovative open source AI projects that extend, integrate with, or build upon PyTorch. Welcome to the newest PyTorch Ecosystem Landscape projects!
New Additions to the PyTorch Ecosystem
PhysicsNeMo
NVIDIA PhysicsNeMo is an open source, PyTorch-based framework designed to accelerate the development of AI Physics models: physics-aware AI surrogate models for scientific and engineering applications. Built to integrate seamlessly with the existing PyTorch ecosystem, it provides a comprehensive toolkit for building, training, fine-tuning, and evaluating surrogate models that combine physical laws (via physics- and geometry-informed approaches) with simulation data. PhysicsNeMo allows developers to create high-fidelity surrogate models and perform real-time simulations for complex domains such as computational fluid dynamics (CFD), structural mechanics, and climate science without needing deep expertise in traditional numerical solvers.
For PyTorch developers, PhysicsNeMo offers a familiar and modular developer experience, featuring a library of optimized, pre-trained architectures (including neural operators, GNNs, transformers, diffusion-based models, and PINNs) along with specialized data pipelines, sampling algorithms, and loss functions tailored for physics constraints. The framework is engineered for enterprise-scale performance on large input domains, enabling users to scale training from a single GPU to multi-node clusters. With its open and extensible design, PhysicsNeMo empowers the community to rapidly prototype and deploy physics-aware AI solutions while maintaining full compatibility with standard PyTorch libraries and tools.
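To give a flavor of what "loss functions tailored for physics constraints" means, here is a minimal, framework-agnostic sketch (illustrative only; PhysicsNeMo's actual loss APIs differ, and the function name and toy ODE below are our own). It combines a data-fit term with a physics-residual term for the equation du/dx = -u, approximated by finite differences.

```python
import math

def physics_informed_loss(u_pred, u_data, xs, lam=1.0):
    """Toy physics-informed loss: data misfit plus ODE residual.

    u_pred: model predictions at points xs
    u_data: observed values at the same points
    lam:    weight of the physics penalty
    """
    n = len(xs)
    # Data term: mean squared error against observations.
    data_loss = sum((p - d) ** 2 for p, d in zip(u_pred, u_data)) / n
    # Physics term: residual of du/dx + u = 0 via forward differences.
    phys_loss = 0.0
    for i in range(n - 1):
        dudx = (u_pred[i + 1] - u_pred[i]) / (xs[i + 1] - xs[i])
        phys_loss += (dudx + u_pred[i]) ** 2
    phys_loss /= n - 1
    return data_loss + lam * phys_loss

# The exact solution u = exp(-x) drives both terms toward zero
# (up to finite-difference discretization error).
xs = [i * 0.1 for i in range(11)]
exact = [math.exp(-x) for x in xs]
print(physics_informed_loss(exact, exact, xs))
```

Minimizing such a combined loss is what lets a surrogate model honor physical laws even where simulation data is sparse.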
Learn more about PhysicsNeMo.
Unsloth
Unsloth is an open source framework for running, training, and applying reinforcement learning to open models on local and data center hardware. Unsloth has a web UI and supports 500+ models across text, vision, audio/TTS, embeddings, and RL, with workflows ranging from LoRA/QLoRA to full fine-tuning, continued pretraining, and FP8 training. It also includes automatic dataset creation from PDF, CSV, and DOCX files, training observability, and export to runtimes such as GGUF/llama.cpp and vLLM. Unsloth comes in two forms: the code-based Unsloth Core and the web UI, Unsloth Studio; both run on macOS, Windows, Linux, and WSL.
With custom Triton and math kernels, Unsloth is designed to improve training speed and memory efficiency with no accuracy degradation. Unsloth builds on PyTorch for model training and uses torch.compile for optimizations. Unsloth also works with TorchAO for quantization-aware training, FP8 RL, and ExecuTorch for mobile deployment, among other functionalities.
Learn more about Unsloth.
ONNX
ONNX (Open Neural Network Exchange) is an open standard for representing machine learning models, enabling interoperability across frameworks, compilers, and runtimes. Originally created by Facebook (now Meta) and Microsoft in 2017, ONNX has grown into a vendor-neutral ecosystem hosted by the Linux Foundation, with broad industry adoption spanning cloud providers, hardware vendors, and research institutions. The ONNX specification defines a common set of operators and a standard file format, allowing developers to train models in their preferred framework and deploy them across a wide range of platforms and devices without manual conversion.
As part of the PyTorch ecosystem, ONNX plays a key role in bridging the gap between model development and production deployment. PyTorch's built-in torch.onnx.export functionality allows users to seamlessly convert their models to the ONNX format, unlocking access to a rich ecosystem of optimizing compilers and inference runtimes, from cloud servers to edge devices. With an active open source community, a quarterly release cadence, and ongoing work on new operators, quantization support, and large-model handling, ONNX continues to evolve as a critical piece of the ML infrastructure stack.
Learn more about ONNX.
KTransformers
KTransformers is an open source project launched by Tsinghua University and Approaching.AI, designed to deploy LLMs in low-VRAM scenarios. As MoE architectures scale to hundreds of billions, or even trillions, of parameters, the assumption that all model components must reside on GPUs becomes a primary barrier to accessibility. Only a small subset of MoE experts is activated per token, yet conventional serving systems bind all expert weights to GPU memory, wasting the majority of expensive accelerator capacity on idle parameters.
KTransformers breaks this binding through CPU-GPU heterogeneous computing, combining fine-grained expert offloading, high-performance CPU kernels, NUMA-aware execution, and dynamic expert scheduling. It has also been publicly recommended by several leading open source model teams, including Kimi K2.5, Qwen3.5, and MiniMax-M2.5, and has been used to provide day-0 heterogeneous inference support for major MoE model releases.
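The intuition behind fine-grained expert offloading can be sketched in a few lines. This toy (our own illustration, not KTransformers' actual code) keeps only a small "GPU-resident" LRU cache of hot experts while the full expert set stays in cheaper "CPU" memory, counting how many CPU-to-GPU transfers actually occur:

```python
from collections import OrderedDict

class ExpertCache:
    """Toy model of GPU-resident expert caching with LRU eviction."""

    def __init__(self, num_experts, gpu_slots):
        # All expert weights live in (cheap, plentiful) CPU memory.
        self.cpu_experts = {e: f"weights-{e}" for e in range(num_experts)}
        self.gpu = OrderedDict()   # expert id -> weights, in LRU order
        self.gpu_slots = gpu_slots
        self.transfers = 0         # CPU -> GPU copies performed

    def fetch(self, expert_id):
        if expert_id in self.gpu:          # hit: refresh LRU position
            self.gpu.move_to_end(expert_id)
        else:                              # miss: evict if full, then copy up
            if len(self.gpu) >= self.gpu_slots:
                self.gpu.popitem(last=False)
            self.gpu[expert_id] = self.cpu_experts[expert_id]
            self.transfers += 1
        return self.gpu[expert_id]

# 64 experts, but only 8 GPU slots. With top-2 routing that keeps
# reusing a handful of hot experts, almost every access is a cache hit.
cache = ExpertCache(num_experts=64, gpu_slots=8)
for token in range(100):
    for expert in (token % 4, 60):
        cache.fetch(expert)
print(cache.transfers)  # 5: experts 0-3 and 60 each load once, then all hits
```

Real systems like KTransformers add much more on top (high-performance CPU kernels, NUMA-aware execution, dynamic scheduling), but the core economics are the same: sparse activation means only a fraction of expert weights need to occupy accelerator memory at any moment.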
Learn more about KTransformers.
How to Join the PyTorch Ecosystem Landscape
If you’re developing a project that supports the PyTorch community, you’re welcome to apply for inclusion in the Ecosystem landscape. Please review the PyTorch Ecosystem landscape review process to ensure that you meet the minimum expectations before applying.