Announcements
2025 Docathon Recap
The 2025 PyTorch Docathon brought together 150+ contributors and resulted in over 60 merged PRs across core repositories. Thank you to everyone who helped improve PyTorch documentation. Read our wrap-up blog to learn more.
PyTorch Conference Poster Session CFP Still Open
Submit your poster proposal for PyTorch Conference 2025 by August 1. Share your research, tools, or use cases with the broader community. Learn more about PyTorch Conference 2025.
ICYMI: Spotlight on Ecosystem Projects
The PyTorch Ecosystem Working Group is surfacing mature, community-driven projects making a significant impact. Learn how the process works and explore the first round of spotlights in our recent blog.
Upcoming Events
verl: Flexible and Scalable Reinforcement Learning Library for LLM Reasoning and Tool-Calling
August 6 – Virtual
verl is a scalable RL framework for LLMs with async rollouts, expert parallelism, and support for PPO, GRPO, and DAPO. It tackles tool use, multi-turn reasoning, and MoE scaling (e.g., DeepSeek).
Speaker Haibin Lin (ByteDance) works on optimizing large-scale LLM training and previously contributed to Apache MXNet. Register today
PyTorch 2.8 Live Release Q&A
August 14 – Virtual
Our PyTorch 2.8 Live Q&A webinar will focus on PyTorch packaging, exploring wheel variant support, a new experimental feature in the 2.8 release. This feature is designed to improve the PyTorch install experience once it becomes generally available. Register today
PyTorch Conference 2025
October 22-23 – San Francisco, CA
The Poster Session Call for Proposals is still open. Whether you’re a student, researcher, or practitioner, this is your chance to share your work with the PyTorch community.
Submit a proposal
Community Events
Recent Events
Accelerating DLRMv2 Inference on Arm Neoverse CPUs with PyTorch
Session recordings now available
Member Round-Up
Meta: Enabling Fully Sharded Data Parallel (FSDP2) in Opacus
Meta, IBM: Reducing Storage Footprint and Bandwidth Usage for Distributed Checkpoints with PyTorch DCP
vLLM, Meta: PyTorch + vLLM = ♥️
Hugging Face, Meta: Presenting Flux Fast: Making Flux go brrr on H100s
Meta: Fault Tolerant Llama: training with 2000 synthetic failures every ~15 seconds and no checkpoints on Crusoe L40S
Member Contributions Power PyTorch
A special thank you to PyTorch Foundation members Meta, AWS, Microsoft, and NVIDIA for supporting our CI infrastructure and ongoing development.
Interested in contributing?
Learn about the Cloud Credits Program
Subscribe
Get updates like these delivered directly to your inbox. Subscribe to the PyTorch newsletter for announcements, event highlights, technical deep dives, and more: https://pytorch.org/newsletter/