UnityMLAgentsWrapper

class torchrl.envs.UnityMLAgentsWrapper(*args, **kwargs)

Unity ML-Agents environment wrapper.

GitHub: https://github.com/Unity-Technologies/ml-agents

Documentation: https://unity-technologies.github.io/ml-agents/Python-LLAPI/

Parameters:

env (mlagents_envs.environment.UnityEnvironment) – the ML-Agents environment to wrap.

Keyword Arguments:
  • device (torch.device, optional) – if provided, the device on which the data is to be cast. Defaults to None.

  • batch_size (torch.Size, optional) – the batch size of the environment. Defaults to torch.Size([]).

  • allow_done_after_reset (bool, optional) – if True, it is tolerated for envs to be done just after reset() is called. Defaults to False.

  • group_map (MarlGroupMapType or Dict[str, List[str]], optional) – how to group agents in tensordicts for input/output. See MarlGroupMapType for more info, and the sketch after this list for a usage example. If not specified, agents are grouped according to the group ID given by the Unity environment. Defaults to None.

  • categorical_actions (bool, optional) – if True, categorical specs will be converted to the TorchRL equivalent (torchrl.data.Categorical), otherwise a one-hot encoding will be used (torchrl.data.OneHot). Defaults to False.
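
The keyword arguments above can be combined at construction time. Below is a minimal sketch, not taken from the TorchRL documentation; it assumes that MarlGroupMapType is importable from torchrl.envs.utils and that a Unity editor or build is available for UnityEnvironment() to connect to:

>>> import torch
>>> from mlagents_envs.environment import UnityEnvironment
>>> from torchrl.envs import UnityMLAgentsWrapper
>>> from torchrl.envs.utils import MarlGroupMapType  # assumed import path
>>> base_env = UnityEnvironment()  # connects to a running Unity editor by default
>>> env = UnityMLAgentsWrapper(
...     base_env,
...     device=torch.device("cpu"),
...     group_map=MarlGroupMapType.ONE_GROUP_PER_AGENT,  # one tensordict group per agent
...     categorical_actions=True,  # discrete actions exposed as torchrl.data.Categorical
... )
>>> td = env.reset()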

Variables:

available_envs – list of registered environments available to build

Examples

>>> from mlagents_envs.environment import UnityEnvironment
>>> from torchrl.envs import UnityMLAgentsWrapper
>>> # calling UnityEnvironment() with no file_name connects to a running Unity editor
>>> base_env = UnityEnvironment()
>>> env = UnityMLAgentsWrapper(base_env)
>>> td = env.reset()
>>> # sample a random action for every agent group and step the environment
>>> td = env.step(td.update(env.full_action_spec.rand()))
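
As a follow-up sketch (assuming the wrapped environment connects successfully), a short trajectory can be collected with the generic EnvBase.rollout method, which handles resetting, action sampling, and stepping internally:

>>> rollout = env.rollout(max_steps=3)  # TensorDict holding the collected transitions
>>> print(rollout)
>>> env.close()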
