RoboHiveEnv

torchrl.envs.RoboHiveEnv(*args, **kwargs)

A wrapper for RoboHive gym environments.

RoboHive is a collection of environments/tasks simulated with the MuJoCo physics engine and exposed through the OpenAI Gym API.

Github: https://github.com/vikashplus/robohive/

Doc: https://github.com/vikashplus/robohive/wiki

Paper: https://arxiv.org/abs/2310.06828

Warning

RoboHive requires gym 0.13.

Parameters:
  • env_name (str) – the environment name to build. Must be one of available_envs

  • categorical_action_encoding (bool, optional) – if True, categorical specs will be converted to the TorchRL equivalent (torchrl.data.DiscreteTensorSpec), otherwise a one-hot encoding will be used (torchrl.data.OneHotTensorSpec). Defaults to False.
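The effect of this flag can be sketched in plain Python (this is an illustrative toy, not the TorchRL spec implementation): a discrete action is represented either as its integer index or as a one-hot vector.

```python
# Illustrative sketch of the two discrete-action encodings the flag
# switches between (names and helpers here are hypothetical).
def categorical(action_index: int) -> int:
    # DiscreteTensorSpec-style: the action is just its integer index.
    return action_index

def one_hot(action_index: int, num_actions: int) -> list:
    # OneHotTensorSpec-style: a vector with a single 1 at the index.
    return [1 if i == action_index else 0 for i in range(num_actions)]

# Action 2 out of 4 possible actions:
categorical(2)   # → 2
one_hot(2, 4)    # → [0, 0, 1, 0]
```

One-hot encoding is the default because it composes naturally with networks that output one logit per action; the categorical form is more compact when action spaces are large.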

Keyword Arguments:
  • from_pixels (bool, optional) – if True, an attempt to return the pixel observations from the env will be performed. By default, these observations will be written under the "pixels" entry. The method being used varies depending on the gym version and may involve a wrappers.pixel_observation.PixelObservationWrapper. Defaults to False.

  • pixels_only (bool, optional) – if True, only the pixel observations will be returned (by default under the "pixels" entry in the output tensordict). If False, observations (e.g., states) and pixels will be returned whenever from_pixels=True. Defaults to True.

  • from_depths (bool, optional) – if True, an attempt to return the depth observations from the env will be performed. By default, these observations will be written under the "depths" entry. Requires from_pixels to be True. Defaults to False.

  • frame_skip (int, optional) – if provided, indicates for how many steps the same action is to be repeated. The observation returned will be the last observation of the sequence, whereas the reward will be the sum of rewards across steps.
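The frame-skip semantics described above can be sketched with a toy step function in plain Python (this is an illustration of the documented behavior, not the TorchRL implementation; all names are hypothetical):

```python
# Sketch of frame_skip semantics: repeat the same action, keep the
# last observation, and sum the rewards across the skipped steps.
def step_with_frame_skip(step_fn, action, frame_skip):
    total_reward = 0.0
    obs = None
    for _ in range(frame_skip):
        obs, reward = step_fn(action)
        total_reward += reward
    return obs, total_reward

# Toy "environment": the observation counts steps, the reward is 1.0.
state = {"t": 0}
def toy_step(action):
    state["t"] += 1
    return state["t"], 1.0

obs, reward = step_with_frame_skip(toy_step, action=0, frame_skip=4)
# obs is the last observation (4); reward is the sum over 4 steps (4.0)
```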

  • device (torch.device, optional) – if provided, the device on which the data is to be cast. Defaults to torch.device("cpu").

  • batch_size (torch.Size, optional) – Only torch.Size([]) will work with RoboHiveEnv since vectorized environments are not supported within the class. To execute more than one environment at a time, see ParallelEnv.

  • allow_done_after_reset (bool, optional) – if True, the environment is allowed to be done immediately after reset() is called. Defaults to False.

Variables:
  • available_envs (list) – a list of available envs to build.

Examples

>>> from torchrl.envs import RoboHiveEnv
>>> env = RoboHiveEnv(RoboHiveEnv.available_envs[0])
>>> env.rollout(3)
