TorchRL is an open-source Reinforcement Learning (RL) library for PyTorch.
It provides PyTorch- and Python-first, low- and high-level abstractions for RL that are intended to be efficient, modular, documented, and properly tested. The code is aimed at supporting research in RL. Most of it is written in Python in a highly modular way, so that researchers can easily swap components, transform them, or write new ones with little effort.
This repo aims to align with the existing PyTorch ecosystem libraries in that it has a “dataset pillar” (environments), transforms, models, and data utilities (e.g. collectors and containers). TorchRL aims to have as few dependencies as possible (the Python standard library, NumPy, and PyTorch). Common environment libraries (e.g. OpenAI Gym) are optional.
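To illustrate the composition idea behind the “dataset pillar”, here is a minimal sketch of a base environment wrapped by a reusable transform. The names (`ToyEnv`, `ScaleReward`, `TransformedEnv`) are illustrative stand-ins, not TorchRL's actual classes; TorchRL's own environments and transforms operate on tensor-based data structures.

```python
# Illustrative sketch only: these classes mimic the env/transform
# composition pattern, not TorchRL's real API.

class ToyEnv:
    """A trivial environment: the observation counts steps, reward is 1."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3  # obs, reward, done

class ScaleReward:
    """A reusable transform: multiplies every reward by a constant."""
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, obs, reward, done):
        return obs, reward * self.scale, done

class TransformedEnv:
    """Applies a chain of transforms to each step of a base environment."""
    def __init__(self, env, transforms):
        self.env, self.transforms = env, transforms

    def reset(self):
        return self.env.reset()

    def step(self, action):
        out = self.env.step(action)
        for transform in self.transforms:
            out = transform(*out)
        return out

env = TransformedEnv(ToyEnv(), [ScaleReward(0.5)])
env.reset()
obs, reward, done = env.step(action=None)
print(obs, reward, done)  # -> 1 0.5 False
```

Because transforms are plain callables composed around the base environment, swapping or stacking them requires no change to the environment itself, which is the modularity the library is built around.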
On the low-level end, TorchRL comes with a set of highly reusable functionals for cost functions, returns, and data processing.
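As an example of the kind of “returns” functional meant here, the sketch below computes discounted returns from a reward sequence. This pure-Python version only illustrates the idea; TorchRL's actual functionals operate on PyTorch tensors and support batching.

```python
# Illustrative sketch: discounted returns G_t = r_t + gamma * G_{t+1},
# computed by a backward pass over the reward sequence.
# Not TorchRL's actual implementation.

def discounted_returns(rewards, gamma):
    """Return the discounted return at every time step."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # -> [1.75, 1.5, 1.0]
```

Keeping such operations as stateless functionals, rather than baking them into a specific algorithm class, is what lets them be reused across objectives.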
TorchRL aims for high modularity and good runtime performance.
To read more about TorchRL's philosophy and capabilities beyond this API reference, see the TorchRL paper.
- API Reference
- torchrl.collectors package
- torchrl.data package
- torchrl.envs package
- torchrl.modules package
- torchrl.objectives package
- torchrl.trainers package
- torchrl._utils package