
NeuralOperator Joins the PyTorch Ecosystem: Learning in Infinite Dimensions with Neural Operators

Today, we are pleased to welcome NeuralOperator to the PyTorch Ecosystem: a dedicated library for learning neural operators in PyTorch, enabling applications of AI to science and engineering. Many of these applications naturally require learning mappings between function spaces, such as solving partial differential equations (PDEs). In practice, this is achieved through the mathematical framework of neural operators.

NeuralOperator turns this mathematical theory into a practical, well-documented PyTorch library that you can use today. It enables researchers and practitioners to apply neural operators to their own problems and to train models that learn maps between functions rather than fixed-size tensors, with strong discretization-convergence guarantees. Learn more about the PyTorch Ecosystem in the PyTorch Landscape.

About NeuralOperator

NeuralOperator is an open-source Python library developed jointly by a core team of researchers from NVIDIA and Caltech. Built on top of PyTorch, it provides a full stack for operator learning. Instead of learning mappings between finite-dimensional vectors or images, neural operators learn mappings between function spaces: they can be evaluated at arbitrary discretizations while remaining consistent across resolutions.

NeuralOperator provides easy access to state-of-the-art neural operator models for scientific computing, such as Fourier Neural Operators (FNO), TFNO, SFNO, GINO, UQNO, LocalNO, RNO, and OTNO.

NeuralOperator also provides access to a rich collection of building blocks that can be combined to form new architectures. All of these building blocks inherit from torch.nn.Module, so they integrate seamlessly into existing PyTorch pipelines.
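To make this concrete, the following is a minimal, self-contained toy version (plain PyTorch, not the library's actual implementation) of the spectral convolution at the heart of the FNO. It shows why one trained layer can be evaluated on grids of different resolutions: the learned weights live on a fixed set of Fourier modes, independent of the input discretization.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Toy 1D spectral convolution: learn weights on truncated Fourier modes."""

    def __init__(self, in_channels, out_channels, n_modes):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / (in_channels * out_channels)
        # Complex weights for the lowest n_modes Fourier coefficients only.
        self.weight = torch.nn.Parameter(
            scale * torch.randn(in_channels, out_channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, n_points) -- a function sampled on a grid.
        x_ft = torch.fft.rfft(x)  # to Fourier space
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # Mix channels on the retained low-frequency modes; drop the rest.
        out_ft[:, :, : self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.n_modes], self.weight
        )
        # Back to physical space at the input's own resolution.
        return torch.fft.irfft(out_ft, n=x.shape[-1])

# The same layer (same weights) accepts different discretizations.
layer = SpectralConv1d(in_channels=1, out_channels=1, n_modes=8)
coarse = layer(torch.randn(2, 1, 64))   # 64-point grid
fine = layer(torch.randn(2, 1, 256))    # 256-point grid
```

Since the class subclasses torch.nn.Module, it composes with standard PyTorch layers, optimizers, and training loops, which is the same design choice the library's building blocks make.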

NeuralOperator Joining PyTorch Ecosystem

With NeuralOperator joining the PyTorch Ecosystem, PyTorch users gain a powerful, production-ready toolkit for learning in function spaces using standard PyTorch code, which allows them to:

  • Build fast surrogates that substantially accelerate expensive PDE solvers, and evaluate them at different resolutions without retraining
  • Combine data-driven learning with physics-informed losses using standard PyTorch code
  • Experiment with state-of-the-art neural operator architectures inside the same ecosystem they already use for deep learning
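The physics-informed point can be sketched in a few lines of plain PyTorch. This is a hypothetical toy setup (a small MLP stands in for a neural operator, and the ODE and loss weight are illustrative), showing how a data loss and a physics residual computed with autograd combine into one training objective:

```python
import torch

# Hypothetical stand-in for a neural operator: a small MLP on 1D inputs.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

# Grid points where we both have data and enforce physics.
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1).requires_grad_(True)
u_true = torch.sin(torch.pi * x.detach())  # synthetic "observed" data

u = model(x)
data_loss = torch.nn.functional.mse_loss(u, u_true)

# Physics residual for the toy ODE u'' + pi^2 u = 0, via autograd.
(du,) = torch.autograd.grad(u.sum(), x, create_graph=True)
(d2u,) = torch.autograd.grad(du.sum(), x, create_graph=True)
physics_loss = (d2u + torch.pi**2 * u).pow(2).mean()

# Weighted sum of the two terms; 0.1 is an illustrative weight.
loss = data_loss + 0.1 * physics_loss
loss.backward()
```

Because both terms are ordinary PyTorch tensors, the combined loss plugs into any standard optimizer and training loop without special machinery.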

You can try NeuralOperator in a few minutes: visit https://github.com/neuraloperator/neuraloperator to get started.

We look forward to seeing how the community uses NeuralOperator to push the frontier of AI for science and to create new applications that were previously out of reach for conventional deep learning models.