MultiAgentNetBase
- class torchrl.modules.MultiAgentNetBase(*, n_agents: int, centralized: bool | None = None, share_params: bool | None = None, agent_dim: int | None = None, vmap_randomness: str = 'different', **kwargs)
A base class for multi-agent networks.
Note
To initialize the MARL module parameters with the torch.nn.init module, please refer to the get_stateful_net() and from_stateful_net() methods.
- forward(*inputs: Tuple[Tensor]) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
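As a hedged illustration, the sketch below calls the module instance (rather than forward()) on the concrete MultiAgentMLP subclass used in the Examples further down; the constructor arguments are taken from that example, and the printed output shape is an assumption based on those arguments (one output vector per agent).
>>> import torch
>>> from torchrl.modules import MultiAgentMLP
>>> mlp = MultiAgentMLP(
...     n_agent_inputs=3,
...     n_agent_outputs=2,
...     n_agents=6,
...     centralized=False,
...     share_params=False,
...     depth=2,
... )
>>> obs = torch.zeros(64, 6, 3)  # (batch, n_agents, n_agent_inputs)
>>> out = mlp(obs)               # call the instance, not mlp.forward(obs)
>>> out.shape                    # assumed shape: (batch, n_agents, n_agent_outputs)
torch.Size([64, 6, 2])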
- from_stateful_net(stateful_net: Module)
Populates the parameters given a stateful version of the network.
See get_stateful_net() for details on how to gather a stateful version of the network.
- Parameters:
stateful_net (nn.Module) – the stateful network from which the params should be gathered.
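A minimal sketch of when this call is needed, assuming the same MultiAgentMLP configuration as in the Examples below; the hasattr-based traversal mirrors that example and the re-initialization values are purely illustrative. Because the parameters of the stateful copy are replaced out-of-place here, they must be copied back explicitly:
>>> import torch
>>> from torchrl.modules import MultiAgentMLP
>>> mlp = MultiAgentMLP(
...     n_agent_inputs=3, n_agent_outputs=2, n_agents=6,
...     centralized=False, share_params=False, depth=2,
... )
>>> snet = mlp.get_stateful_net()  # deepcopy by default (copy=True)
>>> def reinit(module):
...     weight = getattr(module, "weight", None)
...     if weight is not None:
...         # out-of-place: replace the Parameter object instead of mutating it
...         module.weight = torch.nn.Parameter(torch.randn_like(weight) * 0.01)
>>> snet.apply(reinit)
>>> mlp.from_stateful_net(snet)  # needed because the update was out-of-place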
- get_stateful_net(copy: bool = True)
Returns a stateful version of the network.
This can be used to initialize parameters.
Such networks will often not be callable out-of-the-box and will require a vmap call to be executable.
- Parameters:
copy (bool, optional) – if True, a deepcopy of the network is made. Defaults to True.
If the parameters are modified in-place (recommended), there is no need to copy the parameters back into the MARL module. See from_stateful_net() for details on how to re-populate the MARL model with parameters that have been re-initialized out-of-place.
Examples
>>> from torchrl.modules import MultiAgentMLP
>>> import torch
>>> n_agents = 6
>>> n_agent_inputs = 3
>>> n_agent_outputs = 2
>>> batch = 64
>>> obs = torch.zeros(batch, n_agents, n_agent_inputs)
>>> mlp = MultiAgentMLP(
...     n_agent_inputs=n_agent_inputs,
...     n_agent_outputs=n_agent_outputs,
...     n_agents=n_agents,
...     centralized=False,
...     share_params=False,
...     depth=2,
... )
>>> snet = mlp.get_stateful_net()
>>> def init(module):
...     if hasattr(module, "weight"):
...         torch.nn.init.kaiming_normal_(module.weight)
>>> snet.apply(init)
>>> # If the module has been updated out-of-place (not the case here) we can reset the params
>>> mlp.from_stateful_net(snet)