Issue #2
Welcome to the second edition of the PyTorch newsletter! In this issue, read about how we celebrated the PyTorch community at the first-ever PyTorch Ecosystem Day (PTED), discover a new podcast for PyTorch developers, and learn about important updates to the PyTorch frontend.
PyTorch Ecosystem Day
Piotr Bialecki (Sr. Software Engineer, NVIDIA) spoke about his journey of using PyTorch and what he sees in the future for PyTorch. Miquel Farré (Sr. Technology Manager, Disney) spoke about the Creative Genome project that uses the PyTorch ecosystem to annotate all Disney content. Ritchie Ng (CEO, Hessian Matrix) spoke about the growth of AI in the Asia Pacific region, and how to get started with PyTorch for production AI use cases. Members of the community showcased how they were using PyTorch via 71 posters and pop-up breakout sessions. See all of the posters and listen to the opening keynote talks here!
PyTorch Developer Podcast
Edward Yang (Research Engineer, Facebook AI) talks about internal development concepts like binding C++ in Python, the dispatcher, PyTorch’s library structure and more. Check out this new series; each episode is around 15 minutes long. Listen to it wherever you get your podcasts.
Forward Mode AD
The core logic for Forward Mode AD (based on “dual tensors”) is now in PyTorch. All the APIs to manipulate such tensors, codegen, and view handling are already in master (1.9.0a0). Gradcheck and a first set of formulas will be added in the following month; full support for all PyTorch functions, custom Autograd functions, and higher-order gradients will land later this year. Read more about it or share your feedback with @albanD on the corresponding RFC.
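For a quick taste of the dual-tensor API, here is a minimal sketch; the forward-AD surface is still experimental, formula coverage is limited at the time of writing, and names may shift before release:

    import torch
    import torch.autograd.forward_ad as fwAD

    primal = torch.randn(3)
    tangent = torch.randn(3)  # direction for the directional derivative

    with fwAD.dual_level():
        # A dual tensor carries the primal value and its tangent together.
        dual_x = fwAD.make_dual(primal, tangent)
        dual_y = torch.sin(dual_x)
        # unpack_dual returns the primal output and the Jacobian-vector product.
        y, jvp = fwAD.unpack_dual(dual_y)

    print(torch.allclose(jvp, torch.cos(primal) * tangent))  # expect True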
Make complex conjugation lazy
PR #54987 makes the conjugate operation on complex tensors return a view that has a special is_conj() bit flipped. Aside from saving memory by not creating a full tensor, this grants a potential speedup if the following operation can handle conjugated inputs directly. For such operations (like gemm), a flag is passed to the low-level API; for others, the conjugate is materialized before being passed to the operation.
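As a rough illustration of the behavior on a build that includes the PR (a minimal sketch; is_conj() and the resolve_conj() materialization helper are assumed to be available in your nightly):

    import torch

    x = torch.randn(2, 2, dtype=torch.complex64)
    y = torch.conj(x)       # a view of x; no conjugated copy is allocated
    print(y.is_conj())      # True: conjugation is recorded as a bit on the view
    z = y.resolve_conj()    # explicitly materialize the conjugated values
    print(z.is_conj())      # False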
torch.use_deterministic_algorithms is stable
torch.use_deterministic_algorithms() (docs) is stable in master (1.9.0a0). When set to True, it makes non-deterministic operations use their deterministic implementation if one is available, and throw a RuntimeError if not.
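A minimal sketch of the toggle (note that some CUDA operations additionally require an environment variable such as CUBLAS_WORKSPACE_CONFIG to behave deterministically):

    import torch

    torch.use_deterministic_algorithms(True)
    print(torch.are_deterministic_algorithms_enabled())  # True

    # From here on, ops with a deterministic implementation use it; ops that
    # have none raise a RuntimeError instead of silently returning
    # non-deterministic results.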
torch.linalg and torch.special
torch.linalg is now stable; the module maintains fidelity with NumPy’s np.linalg linear algebra functions.
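For example, a small sketch using a few of the NumPy-style entry points:

    import torch

    A = torch.randn(3, 3, dtype=torch.float64)
    b = torch.randn(3, dtype=torch.float64)

    x = torch.linalg.solve(A, b)       # mirrors np.linalg.solve
    print(torch.allclose(A @ x, b))    # True

    print(torch.linalg.norm(A))        # Frobenius norm, as with np.linalg.norm
    Q, R = torch.linalg.qr(A)          # reduced QR, matching np.linalg.qr defaults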
torch.special (beta) contains functions analogous to those in scipy.special. Here’s the tracking issue if you’d like to contribute functions to torch.special. If a function you want isn’t already on the list, let us know on the tracking issue about your use case and why it should be added.
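A few of the SciPy-style functions, as a quick sketch (exact coverage depends on your build):

    import torch

    x = torch.linspace(-2, 2, steps=5)

    print(torch.special.erf(x))                # error function, like scipy.special.erf
    print(torch.special.expit(x))              # logistic sigmoid, like scipy.special.expit
    print(torch.special.gammaln(x.abs() + 1))  # log-gamma, like scipy.special.gammaln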
Generalizing AMP to work on CPU
@ezyang: Intel is interested in bringing automatic mixed precision to CPU in “[RFC] Extend Autocast to CPU/CUDA with BF16 data type” (pytorch/pytorch#55374). One big question is what the autocasting API should look like for CPU: should we provide a single, generalized torch.autocast API (keeping in mind that CPU autocasting would go through bfloat16, while the existing GPU autocasting uses float16), or provide separate APIs for CPU and CUDA? If you have any thoughts or opinions on the subject, please chime in on the issue.
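For context, here is a minimal sketch of the existing GPU autocasting; the generalized CPU form mentioned in the comments below is only one option under discussion in the RFC, not an existing call:

    import torch

    # Existing GPU autocasting (float16).
    model = torch.nn.Linear(8, 8).cuda()
    x = torch.randn(4, 8, device="cuda")

    with torch.cuda.amp.autocast():
        out = model(x)  # eligible ops run in float16, the rest stay in float32

    # The RFC asks whether CPU support should reuse one generalized entry point
    # (roughly: `with torch.autocast("cpu", dtype=torch.bfloat16): ...`) or get a
    # separate API; the generalized form is illustrative of the question only.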
Are you enjoying reading this newsletter? What would you like to know more about? All feedback is welcome and appreciated! To share your suggestions, use this form or simply reply to this email.