VmasEnv

torchrl.envs.libs.vmas.VmasEnv(*args, _inplace_update=False, _batch_locked=True, **kwargs)

VMAS (Vectorized Multi-Agent Simulator) environment wrapper.

Examples

>>> env = VmasEnv(
...     scenario="flocking",
...     num_envs=32,
...     continuous_actions=True,
...     max_steps=200,
...     device="cpu",
...     seed=None,
...     # Scenario kwargs
...     n_agents=5,
... )
>>> print(env.rollout(10))
TensorDict(
    fields={
        action: Tensor(torch.Size([5, 32, 10, 2]), dtype=torch.float64),
        done: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.bool),
        info: TensorDict(
            fields={
                cohesion_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32),
                collision_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32),
                separation_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32),
                velocity_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32)},
            batch_size=torch.Size([5, 32, 10]),
            device=cpu,
            is_shared=False),
        next: TensorDict(
            fields={
                info: TensorDict(
                    fields={
                        cohesion_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32),
                        collision_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32),
                        separation_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32),
                        velocity_rew: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32)},
                    batch_size=torch.Size([5, 32, 10]),
                    device=cpu,
                    is_shared=False),
                observation: Tensor(torch.Size([5, 32, 10, 18]), dtype=torch.float32)},
            batch_size=torch.Size([5, 32, 10]),
            device=cpu,
            is_shared=False),
        observation: Tensor(torch.Size([5, 32, 10, 18]), dtype=torch.float32),
        reward: Tensor(torch.Size([5, 32, 10, 1]), dtype=torch.float32)},
    batch_size=torch.Size([5, 32, 10]),
    device=cpu,
    is_shared=False)
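
Beyond rollout(), the wrapper exposes the standard torchrl EnvBase interface. The snippet below is a minimal sketch that reuses the env instance created in the example above (all names come from that example): it resets the vectorized environments and performs a single random step. Exact tensor shapes depend on the chosen scenario and the number of agents.

>>> # Reset all vectorized environments; returns a TensorDict of initial observations.
>>> td = env.reset()
>>> # Sample a random action from the action spec and step the environments once.
>>> td = env.rand_step(td)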
