2025 was a defining year for the PyTorch Foundation. In May, we announced our expansion into an umbrella foundation and welcomed our first foundation-hosted projects: vLLM and DeepSpeed, alongside PyTorch. In September, Ray joined as a foundation-hosted project, further strengthening the Foundation’s portfolio across training, inference, and distributed compute.
Together, PyTorch, vLLM, DeepSpeed, and Ray reflect how PyTorch Foundation supports a growing set of open source AI projects under neutral governance, while continuing to steward PyTorch as the foundation of the ecosystem.
In all, PyTorch Foundation welcomed three foundation-hosted projects, 14 ecosystem projects, and 16 new members, supported four PyTorch releases (2.6, 2.7, 2.8, and 2.9), and advanced community programs and global events, including the largest PyTorch Conference to date, with more than 3,400 attendees. As we begin 2026, let’s highlight key milestones from 2025 and outline priorities for the year ahead, including insights from Matt White (Executive Director, PyTorch Foundation), Joe Spisak (Product Director, Meta), and Luca Antiga (CTO, Lightning AI and Chair, PyTorch Technical Advisory Council).

PyTorch Foundation Expansion and Foundation-hosted Projects
The PyTorch Foundation’s expansion into an umbrella foundation established a formal governance and support model for open source AI projects used across the ecosystem. This structure enables the Foundation to provide shared infrastructure, neutral governance, and long-term stewardship while supporting a growing portfolio of projects. In addition, we welcomed 16 new members, including Snowflake, Dell Technologies, and Qualcomm, reflecting increased industry participation in supporting open source AI under neutral governance.
With the addition of vLLM and DeepSpeed, the Foundation began supporting projects focused on high-throughput inference and distributed training. The addition of Ray extended this support to distributed compute for scalable data processing, training, and inference workflows.
The Linux Foundation’s 2025 Annual Report described this transition as a watershed moment for open source AI, highlighting the importance of aligning training, inference, and distributed compute projects under neutral governance.
Throughout the year, PyTorch Foundation supported the continued evolution of PyTorch through its governance structures, the Technical Advisory Council, and collaboration across member organizations and contributors.
In 2025, the PyTorch project delivered four major releases: PyTorch 2.6, 2.7, 2.8, and 2.9. These releases introduced improvements across compiler technology, distributed training, inference performance, packaging, and hardware support, reflecting ongoing investment in both research and production use cases.
To support transparency and direct engagement with developers, the Foundation hosted live release Q&A sessions and technical webinars throughout the year, including Inside Helion: Live Q&A with the Developers and multiple PyTorch 2.x release Q&As, available through the PyTorch webinars archive.
Looking ahead, the PyTorch Foundation will continue to convene the global community through events in 2026. The PyTorch Conference will expand to Europe and China, with the inaugural PyTorch Conference EU planned for April 2026 in Paris and PyTorch Conference China planned for September in Beijing. In October, we will host our flagship PyTorch Conference in San Jose, California, taking place October 19-21 as part of Open Source AI Week 2026. We expect this event to draw our largest and most diverse audience yet, with expanded content tracks and broader attendance than past events. Be sure to mark your calendars!

PyTorch Days
In 2025, the PyTorch Foundation introduced PyTorch Day events to complement PyTorch Conference with regionally focused, community-driven programming.
PyTorch Day France was held in Paris in May, co-located with GOSIM AI Paris, and featured a focused program spanning inference optimization, distributed training, robotics, and agentic systems.

PyTorch Day China followed in June in Beijing, co-located with BAAI Conference, featuring a full day of technical talks and posters with strong participation across sessions focused on large-scale training, efficient inference, and hardware-aware optimization.

2026 PyTorch Days
PyTorch Day India 2026 will take place on 7 February in Bengaluru, bringing the regional PyTorch community together for a full day of technical talks, discussions, and poster sessions focused on open source AI and machine learning. The event is hosted by the PyTorch Foundation and co-hosted by IBM, NVIDIA, and Red Hat.
Learn more and register: https://pytorch.org/event/pytorch-day-india-2026/
Other PyTorch Days are anticipated in Dubai and China.
Ecosystem Growth and Working Group Updates
The PyTorch Foundation continued to support ecosystem growth through clearer processes and increased visibility into project activity. In 2025, the PyTorch Ecosystem Working Group published regular updates highlighting new projects joining the ecosystem and projects under consideration.
The PyTorch Ecosystem Working Group Update and Project Spotlights, Q4 2025, highlighted new ecosystem projects including FlagGems, Kubeflow Trainer, LMCache, DeepInverse, Feast, NeuralOperator, PINA, and verl, reflecting continued growth across AI infrastructure, LLM serving, reinforcement learning, and scientific machine learning. In total, we welcomed 14 ecosystem projects.
Looking Ahead to 2026
As we enter 2026, our focus remains on supporting PyTorch and the Foundation’s growing set of foundation-hosted projects through neutral governance, shared infrastructure, and global community engagement. We will continue to bring durable, high-value open source AI projects into the Foundation, grow our ecosystem, expand our programs, including training and certification, and grow our events to help educate and empower the PyTorch community. The perspectives below highlight leadership views on priorities for the year ahead.
Matt White: Executive Director, PyTorch Foundation
In 2025, the PyTorch Foundation proved what an open, neutral home for production-grade AI can look like at global scale. Our expansion into an umbrella foundation and the addition of foundation-hosted projects that span core training and inference pathways strengthened the case for durable, community-governed infrastructure across the AI lifecycle. Just as importantly, we paired that technical and governance momentum with programs that kept developers at the center: clearer ecosystem processes, more transparent engagement, and events that brought practitioners, researchers, and maintainers into the same room to share what’s working, what’s emerging, and what needs to be built next.
In 2026, we will build directly on that foundation with a larger, more connected global footprint. We will host three PyTorch Conferences and three PyTorch Days, designed to serve complementary needs: the conferences will provide broad, high-density programming across the full lifecycle (research to production), while PyTorch Days will remain regionally grounded and community-driven, helping local ecosystems grow their contributor networks and practical adoption. Our flagship PyTorch Conference will move to San Jose at the San Jose Convention Center, enabling a larger venue, expanded content tracks, and a more diversified attendee base across industries, geographies, and levels of experience.
We are also investing in structured learning pathways that meet developers where they are. In 2026, the Foundation will launch PyTorch Associate and PyTorch Professional online training and certifications to support skills development that is both accessible and industry-relevant. The intent is straightforward: make it easier for individuals and teams to build competence in PyTorch and the surrounding ecosystem, while giving employers a clearer signal of practical capability. These programs will complement our events strategy, so that learning can start online, deepen through live engagement, and then translate into real contributions and deployments.
Academic engagement will expand as well. We will continue to sponsor and run workshops at leading research venues, including NeurIPS, MLSys, ICML, and UC Berkeley’s AgentX / Agent Beats to ensure PyTorch remains a collaborative substrate for the next wave of research and reproducible systems work. In parallel, we will grow our Academic and OSPO Outreach program to strengthen relationships with universities, research labs, and open source program offices, and we will launch a new cohort of PyTorch Ambassadors, supporting them with resources to run regional events, foster local communities, and create reliable on-ramps for new contributors.
Technically, our priorities remain anchored in openness, interoperability, and practical impact. We will continue focusing on bringing in high-value, proven projects that extend PyTorch across the AI lifecycle, especially in areas that are rapidly becoming foundational, such as RL, agentic scaffolding, and environments. At the same time, we will grow the PyTorch Ecosystem into new domains, including stronger verticals in AI for science and embodied AI, because the community’s long-term relevance depends on being excellent not just at building models, but at enabling responsible, real-world outcomes across disciplines.
The goal for 2026 is clear: make the PyTorch Foundation the go-to foundation for open source AI, a place where researchers, engineers, developers, and learners can find trusted projects, neutral governance, high-quality education, and a global community that welcomes participation. If you are building with PyTorch, extending the ecosystem, teaching others, or simply looking to learn more about modern AI systems, we encourage you to get involved through events, training, working groups, membership, and contributions across the Foundation’s growing portfolio.
Joe Spisak: Product Director, Meta
I view PyTorch’s evolution as a fundamental shift from being the industry’s leading research framework to becoming the connective foundation of a fully open, production-grade AI ecosystem. For me, PyTorch’s strategic value is no longer about being “just” a deep learning library; it’s about anchoring a modular, community-driven, multi-hardware stack that spans the entire AI lifecycle, from kernel authoring and distributed communication to RL post-training, agentic environments, and edge deployment. I believe our differentiation comes from openness and interoperability: a thriving developer community and first-class support for heterogeneous hardware, from Nvidia and AMD GPUs to TPUs or any other accelerator, while maintaining strong guarantees for developer usability.
As I look ahead to 2026, my focus is squarely on accelerating the industry’s transition to agentic AI systems powered by reinforcement learning at real scale. My priorities are to productize a PyTorch-native agentic and RL stack, drive global standardization around open RL environments, and ensure the surrounding infrastructure (e.g., distributed comms, training systems, and inference engines) can meet the demands of frontier model development and edge execution. I pair this technical trajectory with ecosystem growth: deep open-source collaboration, hardware partnerships, and large-scale developer events that reinforce PyTorch as the platform where the next wave of RL-driven, agentic, and personalized superintelligence will be built.
Luca Antiga: CTO, Lightning AI and Chair, PyTorch Technical Advisory Council
The work of the Technical Advisory Council, which I have had the honor of chairing for the past year and a half, increased significantly in 2025. Over the last year, the activities of the TAC have consolidated into five distinct Working Groups.
Two of them are dedicated to taking PyTorch CI infrastructure to a place where costs are optimized and, above all, shared across multiple entities and cloud providers. A little-known fact: running PyTorch CI costs north of USD 1.5M per month! These costs are currently borne by a small number of organizations, Meta and AWS being the biggest contributors, followed by NVIDIA, AMD, and others. The TAC is now working to make it possible for more entities to contribute to these efforts across multiple cloud providers.
The Ecosystem Working Group has revamped the PyTorch Ecosystem (PyTorch Landscape), a collection of active, high-quality projects that PyTorch users can benefit from across different domains. Interestingly, we have seen many contributions coming from science verticals, in addition to cutting-edge projects that power modern generative AI. This is a testament to PyTorch’s ability to serve diverse communities, from cutting-edge research to production workloads. It bodes well for the future, whatever form AI takes in the years ahead.
The Accelerator Integration Working Group has been focusing on providing design guidance and tooling for entities to enable new hardware accelerators in PyTorch (out-of-core) beyond CUDA and ROCm. With the flexibility of eager mode and its compiler support, PyTorch has become the ideal front end for novel accelerators. In this way, users will be able to leverage the remarkable innovation in accelerated computing through the familiar PyTorch experience.
Last but not least, the recently formed Security Working Group has begun to surface ongoing work on supply chain threats, which require continuous attention, and has focused on processes for managing CVEs.
In 2026, PyTorch’s footprint is going to grow both wide (training, inference, RL, agentic AI) and deep (out-of-core accelerators, kernel authoring, distributed infrastructure). Keeping this growth organic across all projects (PyTorch, DeepSpeed, vLLM, and Ray) and centered around developers is the primary goal of the TAC. As an immediate step, we will enhance the visibility of development roadmaps across all contributing organizations and provide support for ongoing cooperation among all PyTorch Foundation projects.
