API Reference
- torchrl.collectors package
- torchrl.data package
- torchrl.envs package
  - EnvBase
  - GymLikeEnv
  - EnvMetaData
  - Vectorized envs
  - Custom native TorchRL environments
  - Multi-agent environments
  - Auto-resetting Envs
  - Dynamic Specs
  - Transforms
  - Environments with masked actions
  - Recorders
  - Helpers
  - Domain-specific
  - Libraries
    - BraxEnv
    - BraxWrapper
    - DMControlEnv
    - DMControlWrapper
    - GymEnv
    - GymWrapper
    - HabitatEnv
    - IsaacGymEnv
    - IsaacGymWrapper
    - JumanjiEnv
    - JumanjiWrapper
    - MeltingpotEnv
    - MeltingpotWrapper
    - MOGymEnv
    - MOGymWrapper
    - MultiThreadedEnv
    - MultiThreadedEnvWrapper
    - OpenMLEnv
    - PettingZooEnv
    - PettingZooWrapper
    - RoboHiveEnv
    - SMACv2Env
    - SMACv2Wrapper
    - VmasEnv
    - VmasWrapper
    - gym_backend
    - set_gym_backend
- torchrl.modules package
- torchrl.objectives package
  - torch.vmap and randomness
  - Training value functions
  - DQN
  - DDPG
  - SAC
  - REDQ
  - CrossQ
  - IQL
  - CQL
  - DT
  - TD3
  - TD3+BC
  - PPO
  - A2C
  - Reinforce
  - Dreamer
  - Multi-agent objectives
  - Returns
    - ValueEstimatorBase
    - TD0Estimator
    - TD1Estimator
    - TDLambdaEstimator
    - GAE
    - td0_return_estimate
    - td0_advantage_estimate
    - td1_return_estimate
    - vec_td1_return_estimate
    - td1_advantage_estimate
    - vec_td1_advantage_estimate
    - td_lambda_return_estimate
    - vec_td_lambda_return_estimate
    - td_lambda_advantage_estimate
    - vec_td_lambda_advantage_estimate
    - generalized_advantage_estimate
    - vec_generalized_advantage_estimate
    - reward2go
  - Utils
- torchrl.trainers package
- torchrl._utils package
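
The packages indexed above are designed to compose. As a rough orientation only (not part of the reference entries themselves), the sketch below strings together `torchrl.envs`, `torchrl.collectors`, and `torchrl.data`; it assumes `torchrl` and `gymnasium` are installed, and the environment name and collector/buffer sizes are illustrative placeholders.

```python
from torchrl.envs import TransformedEnv, StepCounter
from torchrl.envs.libs.gym import GymEnv
from torchrl.collectors import SyncDataCollector
from torchrl.data import LazyTensorStorage, ReplayBuffer

# torchrl.envs: wrap a Gym environment and add a step-counting transform
env = TransformedEnv(GymEnv("CartPole-v1"), StepCounter())

# torchrl.collectors: roll out a random policy (policy=None) in fixed-size batches
collector = SyncDataCollector(
    env,
    policy=None,
    frames_per_batch=64,
    total_frames=256,
)

# torchrl.data: buffer the collected transitions for later sampling
buffer = ReplayBuffer(storage=LazyTensorStorage(max_size=1_000))
for batch in collector:
    buffer.extend(batch.reshape(-1))

sample = buffer.sample(batch_size=32)  # a TensorDict of 32 transitions
print(sample)
```

The losses in `torchrl.objectives` and the trainers in `torchrl.trainers` consume data in the same TensorDict format produced here; see the corresponding package pages for their APIs.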