torchrl.modules package¶
TensorDict modules: Actors, exploration, value models and generative models¶
TorchRL offers a series of module wrappers aimed at making it easy to build
RL models from the ground up. These wrappers are exclusively based on
tensordict.nn.TensorDictModule and tensordict.nn.TensorDictSequential.
They can loosely be split into three categories:
policies (actors), including exploration strategies;
value models; and simulation models (in model-based contexts).
The main features are:
Integration of the specs in your model to ensure that the model output matches what your environment expects as input;
Probabilistic modules that can automatically sample from a chosen distribution and/or return the distribution of interest;
Custom containers for Q-Value learning, model-based agents and others.
TensorDictModules and SafeModules¶
TorchRL SafeModule
allows you to check that your model output matches what the environment expects.
This should be used, for instance, whenever your model is to be reused across multiple
environments, and when you want to make sure that the outputs
(e.g. the action) always satisfy the bounds imposed by the environment.
Here is an example of how to use that feature with the
Actor
class:
>>> from torch import nn
>>> from torchrl.envs import GymEnv
>>> from torchrl.modules import Actor
>>> env = GymEnv("Pendulum-v1")
>>> action_spec = env.action_spec
>>> model = nn.LazyLinear(action_spec.shape[-1])
>>> policy = Actor(model, in_keys=["observation"], spec=action_spec, safe=True)
The safe flag ensures that the output is always within the bounds of the
action_spec domain: if the network output violates these bounds, it will be
projected (in an L1 sense) onto the desired domain.
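As a quick sanity check (a minimal sketch, reusing the Pendulum-v1 setup above), one can verify that actions collected during a rollout stay within the spec bounds:
>>> rollout = env.rollout(3, policy)
>>> assert action_spec.is_in(rollout["action"])  # actions satisfy the spec bounds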
General class for deterministic actors in RL.
A wrapper around a multi-action actor.
A safe sequence of TensorDictModules.
A Tanh module for deterministic policies with bounded action space.
Exploration wrappers¶
To efficiently explore the environment, TorchRL proposes a series of wrappers
that will override the action sampled by the policy with a noisier version.
Their behaviour is controlled by exploration_mode():
if the exploration mode is set to "random", the exploration is active. In all
other cases, the action written in the tensordict is simply the network output.
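For instance, an EGreedyModule can be appended to a policy so that, when exploration is active, the action is replaced with probability epsilon by a random draw from the spec (a minimal sketch, reusing the policy and action_spec from the example above; epsilon-greedy is most commonly used with discrete action specs):
>>> from tensordict.nn import TensorDictSequential
>>> from torchrl.modules import EGreedyModule
>>> exploration_module = EGreedyModule(spec=action_spec, eps_init=1.0, eps_end=0.1)
>>> # the random draw only happens when the exploration mode is set to "random"
>>> explorative_policy = TensorDictSequential(policy, exploration_module)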
Additive Gaussian PO module.
Additive Gaussian PO wrapper.
Epsilon-Greedy exploration module.
[Deprecated] Epsilon-Greedy PO wrapper.
Ornstein-Uhlenbeck exploration policy module.
Ornstein-Uhlenbeck exploration policy wrapper.
Probabilistic actors¶
Some algorithms such as PPO require a probabilistic policy to be implemented. In TorchRL, these policies take the form of a model, followed by a distribution constructor.
Note
The choice of a probabilistic or regular actor class depends on the algorithm that is being implemented. On-policy algorithms usually require a probabilistic actor, whereas off-policy algorithms usually have a deterministic actor with an extra exploration strategy. There are, however, many exceptions to this rule.
The model reads an input (typically some observation from the environment)
and outputs the parameters of a distribution, while the distribution constructor
reads these parameters and gets a random sample from the distribution and/or
provides a torch.distributions.Distribution object.
>>> from tensordict.nn import NormalParamExtractor, TensorDictModule, TensorDictSequential
>>> from torch import nn
>>> from torch.distributions import Normal
>>> from torchrl.envs import GymEnv
>>> from torchrl.modules import SafeProbabilisticModule
>>> env = GymEnv("Pendulum-v1")
>>> action_spec = env.action_spec
>>> model = nn.Sequential(nn.LazyLinear(action_spec.shape[-1] * 2), NormalParamExtractor())
>>> # build the first module, which maps the observation on the mean and sd of the normal distribution
>>> model = TensorDictModule(model, in_keys=["observation"], out_keys=["loc", "scale"])
>>> # build the distribution constructor
>>> prob_module = SafeProbabilisticModule(
... in_keys=["loc", "scale"],
... out_keys=["action"],
... distribution_class=Normal,
... return_log_prob=True,
... spec=action_spec,
... )
>>> policy = TensorDictSequential(model, prob_module)
>>> # execute a rollout
>>> env.rollout(3, policy)
To facilitate the construction of probabilistic policies, we provide a dedicated
ProbabilisticActor:
>>> policy = ProbabilisticActor(
... model,
... in_keys=["loc", "scale"],
... out_keys=["action"],
... distribution_class=Normal,
... return_log_prob=True,
... spec=action_spec,
... )
which alleviates the need to specify a constructor and to chain it with the module in a sequence.
Outputs of this policy will contain "loc" and "scale" entries, an "action"
sampled according to the normal distribution, and the log-probability
of this action.
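These entries can be read back from a rollout; note that the log-probability key name below is an assumption ("sample_log_prob" is the default in recent tensordict versions, but it may vary):
>>> td = env.rollout(3, policy)
>>> td["loc"], td["scale"]   # distribution parameters written by the first module
>>> td["action"]             # actions sampled from the Normal distribution
>>> td["sample_log_prob"]    # log-probability of the sampled actions (assumed key name)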
General class for probabilistic actors in RL.
Q-Value actors¶
Q-Value actors are a special type of policy that does not directly predict an action from an observation, but instead picks the action that maximises the value (or quality) of a (s,a) -> v map. This map can be a table or a function. For discrete action spaces with continuous (or near-continuous, such as pixels) states, it is customary to use a non-linear model such as a neural network for the map. The semantics of the Q-Value network are hopefully quite simple: we just need to feed it a tensor-to-tensor map that, given a certain state (the input tensor), outputs a list of action values to choose from. The wrapper will write the resulting action in the input tensordict along with the list of action values.
>>> import torch
>>> from tensordict import TensorDict
>>> from torch import nn
>>> from torchrl.data import OneHotDiscreteTensorSpec
>>> from torchrl.modules.tensordict_module.actors import QValueActor
>>> td = TensorDict({'observation': torch.randn(5, 3)}, [5])
>>> # we have 4 actions to choose from
>>> action_spec = OneHotDiscreteTensorSpec(4)
>>> # the model reads a state of dimension 3 and outputs 4 values, one for each action available
>>> module = nn.Linear(3, 4)
>>> qvalue_actor = QValueActor(module=module, spec=action_spec)
>>> qvalue_actor(td)
>>> print(td)
TensorDict(
fields={
action: Tensor(shape=torch.Size([5, 4]), device=cpu, dtype=torch.int64, is_shared=False),
action_value: Tensor(shape=torch.Size([5, 4]), device=cpu, dtype=torch.float32, is_shared=False),
chosen_action_value: Tensor(shape=torch.Size([5, 1]), device=cpu, dtype=torch.float32, is_shared=False),
observation: Tensor(shape=torch.Size([5, 3]), device=cpu, dtype=torch.float32, is_shared=False)},
batch_size=torch.Size([5]),
device=None,
is_shared=False)
Distributional Q-learning is slightly different: in this case, the value network
does not output a scalar value for each state-action pair.
Instead, the value space is divided into an arbitrary number of “bins”. The
value network outputs a probability that the state-action value belongs to one bin
or another.
Hence, for a state space of dimension M, an action space of dimension N and a number of bins B,
the value network encodes a \(\mathbb{R}^{M} \rightarrow \mathbb{R}^{N \times B}\)
map. The following example shows how this works in TorchRL with the DistributionalQValueActor class:
>>> import torch
>>> from tensordict import TensorDict
>>> from torch import nn
>>> from torchrl.data import OneHotDiscreteTensorSpec
>>> from torchrl.modules import DistributionalQValueActor, MLP
>>> td = TensorDict({'observation': torch.randn(5, 4)}, [5])
>>> nbins = 3
>>> # our model reads the observation and outputs a stack of 4 logits (one for each action) of size nbins=3
>>> module = MLP(out_features=(nbins, 4), depth=2)
>>> action_spec = OneHotDiscreteTensorSpec(4)
>>> qvalue_actor = DistributionalQValueActor(module=module, spec=action_spec, support=torch.arange(nbins))
>>> td = qvalue_actor(td)
>>> print(td)
TensorDict(
fields={
action: Tensor(shape=torch.Size([5, 4]), device=cpu, dtype=torch.int64, is_shared=False),
action_value: Tensor(shape=torch.Size([5, 3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
observation: Tensor(shape=torch.Size([5, 4]), device=cpu, dtype=torch.float32, is_shared=False)},
batch_size=torch.Size([5]),
device=None,
is_shared=False)
A Q-Value actor class.
Q-Value TensorDictModule for Q-value policies.
A Distributional DQN actor class.
Distributional Q-Value hook for Q-value policies.
Value operators and joined models¶
TorchRL provides a series of value operators that wrap value networks to
soften the interface with the rest of the library.
The basic building block is torchrl.modules.tensordict_module.ValueOperator:
given an input state (and possibly action), it will automatically write a "state_value"
(or "state_action_value") entry in the tensordict, depending on what the input is.
As such, this class accounts for both value and quality networks.
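For instance, a value network reading the observation can be wrapped as follows (a minimal sketch, assuming a lazy linear value head; by default the output is written under "state_value"):
>>> from torch import nn
>>> from torchrl.modules import ValueOperator
>>> value_net = nn.LazyLinear(1)
>>> # reads "observation" and writes "state_value" by default
>>> value_operator = ValueOperator(value_net, in_keys=["observation"])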
Three classes are also proposed to group together a policy and a value network.
The ActorCriticOperator is a joined actor-quality network with shared parameters:
it reads an observation, passes it through a
common backbone, writes a hidden state, feeds this hidden state to the policy,
then takes the hidden state and the action and provides the quality of the state-action
pair.
The ActorValueOperator is a joined actor-value network with shared parameters:
it reads an observation, passes it through a
common backbone, writes a hidden state, and feeds this hidden state to the policy
and value modules to output an action and a state value.
Finally, the ActorCriticWrapper is a joined actor and value network
without shared parameters. It is mainly intended as a replacement for
ActorValueOperator when a script needs to account for both options.
>>> actor = make_actor()
>>> value = make_value()
>>> if shared_params:
... common = make_common()
... model = ActorValueOperator(common, actor, value)
... else:
... model = ActorCriticWrapper(actor, value)
>>> policy = model.get_policy_operator() # will work in both cases
Actor-critic operator.
Actor-value operator without common module.
Actor-value operator.
General class for value functions in RL.
Inference Action Wrapper for the Decision Transformer.
Domain-specific TensorDict modules¶
These modules include dedicated solutions for MBRL or RLHF pipelines.
Builds an Actor-Value operator from a huggingface-like *LMHeadModel.
World model wrapper.
Hooks¶
The Q-value hooks are used by the QValueActor
and DistributionalQValueActor
modules; those modules should be preferred in general, as they are easier to create
and use.
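Nevertheless, a hook can also be registered directly on a regular nn.Module (a minimal sketch, assuming a one-hot action space with 4 actions; the observation size is made up for illustration):
>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torch import nn
>>> from torchrl.modules.tensordict_module.actors import QValueHook
>>> module = nn.Linear(3, 4)
>>> hook = QValueHook("one_hot")
>>> handle = module.register_forward_hook(hook)
>>> qvalue_actor = TensorDictModule(module=module, in_keys=["observation"], out_keys=["action", "action_value", "chosen_action_value"])
>>> td = qvalue_actor(TensorDict({"observation": torch.randn(5, 3)}, [5]))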
Q-Value hook for Q-value policies.
Distributional Q-Value hook for Q-value policies.
Models¶
TorchRL provides a series of useful “regular” (i.e. non-tensordict) nn.Module classes for RL usage; an example follows the table below.
Regular modules¶
A multi-layer perceptron.
A convolutional neural network.
A 3D-convolutional neural network.
Squeezing layer.
Squeezing layer for convolutional neural networks.
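As an illustration, the MLP class can be instantiated like any other PyTorch module (a minimal sketch; the sizes are made up for illustration):
>>> import torch
>>> from torchrl.modules import MLP
>>> # a multi-layer perceptron with 2 hidden layers of 32 units and 5 outputs
>>> mlp = MLP(in_features=3, out_features=5, depth=2, num_cells=32)
>>> out = mlp(torch.randn(10, 3))  # shape: [10, 5]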
Algorithm-specific modules¶
These networks implement sub-networks that have been shown to be useful for specific algorithms, such as DQN, DDPG or Dreamer.
Decision Transformer Actor class.
DDPG Convolutional Actor class.
DDPG Convolutional Q-value class.
DDPG Actor class.
DDPG Q-value MLP class.
Online Decision Transformer.
Distributional Deep Q-Network softmax layer.
Dreamer actor network.
Dueling CNN Q-network.
A gated recurrent unit (GRU) cell that performs the same operation as nn.GRUCell but is fully coded in Python.
A PyTorch module for executing multiple steps of a multi-layer GRU.
An embedder for a GRU module.
A long short-term memory (LSTM) cell that performs the same operation as nn.LSTMCell but is fully coded in Python.
A PyTorch module for executing multiple steps of a multi-layer LSTM.
An embedder for an LSTM module.
Observation decoder network.
Observation encoder network.
Online Decision Transformer Actor class.
The posterior network of the RSSM.
The prior network of the RSSM.
Multi-agent-specific modules¶
These networks implement models that can be used in multi-agent contexts.
They use vmap() to execute multiple networks all at once on the
network inputs. Because the parameters are batched, initialization may differ
from what is usually done with other PyTorch modules; see
get_stateful_net() for more information.
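For instance, a decentralized multi-agent network with parameters shared across agents could be built as follows (a minimal sketch; the number of agents and the input/output sizes are made up for illustration):
>>> import torch
>>> from torchrl.modules import MultiAgentMLP
>>> n_agents, obs_per_agent, n_actions = 3, 8, 4
>>> mlp = MultiAgentMLP(
...     n_agent_inputs=obs_per_agent,
...     n_agent_outputs=n_actions,
...     n_agents=n_agents,
...     centralised=False,  # each agent only reads its own observation
...     share_params=True,  # a single set of weights is used for all agents
...     depth=2,
... )
>>> obs = torch.randn(10, n_agents, obs_per_agent)  # [*batch, n_agents, features]
>>> out = mlp(obs)  # [*batch, n_agents, n_actions]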
A base class for multi-agent networks.
Multi-agent MLP.
Multi-agent CNN.
QMix mixer.
Value-Decomposition Network mixer.
Exploration¶
Noisy linear layers are a popular way of exploring the environment without altering the actions directly, by instead integrating the stochasticity in the weight configuration.
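A noisy layer can be used as a drop-in replacement for nn.Linear, and its noise can be resampled through reset_noise() (a minimal sketch; the layer sizes are made up for illustration):
>>> import torch
>>> from torch import nn
>>> from torchrl.modules import NoisyLinear, reset_noise
>>> net = nn.Sequential(NoisyLinear(3, 64), nn.ReLU(), NoisyLinear(64, 4))
>>> out = net(torch.randn(10, 3))
>>> net = net.apply(reset_noise)  # resamples the noise of every noisy layer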
Noisy Linear Layer.
Noisy Lazy Linear Layer.
Resets the noise of noisy layers.
Planners¶
CEMPlanner Module.
MPCPlannerBase abstract Module.
MPPI Planner Module.
Distributions¶
Some distributions are typically used in RL scripts.
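For example, the TanhNormal distribution squashes samples of a Normal distribution through a tanh so that they stay within bounds (a minimal sketch, assuming the default [-1, 1] bounds):
>>> import torch
>>> from torchrl.modules import TanhNormal
>>> loc, scale = torch.zeros(10, 3), torch.ones(10, 3)
>>> dist = TanhNormal(loc, scale)
>>> action = dist.rsample()           # reparameterized sample in (-1, 1)
>>> log_prob = dist.log_prob(action)  # log-probability of that sample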
Delta distribution.
Implements a Normal distribution with location scaling.
A wrapper for normal distribution parameters.
Implements a TanhNormal distribution with location scaling.
Implements a Truncated Normal distribution with location scaling.
Implements a Tanh-transformed Delta distribution.
One-hot categorical distribution.
MaskedCategorical distribution.
MaskedCategorical distribution.
Utils¶
Given an input string, returns a surjective function f(x): R -> R^+.
Inverse softplus function.
A biased softplus module.
Get all tensordict primers from all submodules of a module.
A TensorDictModule wrapper to vmap over the input.