Hash

class torchrl.envs.transforms.Hash(in_keys: Sequence[NestedKey], out_keys: Sequence[NestedKey], *, hash_fn: Callable = None, seed: Any | None = None, use_raw_nontensor: bool = False)[source]

Adds a hash value to a tensordict.

Parameters:
  • in_keys (sequence of NestedKey) – the keys of the values to hash.

  • out_keys (sequence of NestedKey) – the keys of the resulting hashes.

  • in_keys_inv (sequence of NestedKey, optional) –

    the keys of the values to hash during inv call.

    Note

    If an inverse map is required, a repertoire Dict[Tuple[int], Any] mapping hashes to values should be passed alongside the list of keys so that the Hash transform knows how to recover a value from a given hash. This repertoire is not copied, so it can be modified in place after the transform is instantiated and those modifications will be reflected in the map. Missing hashes are mapped to None.

  • out_keys_inv (sequence of NestedKey, optional) – the keys of the resulting hashes during inv call.
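The inverse-map behavior described in the note above amounts to a dictionary lookup with a None default. The sketch below illustrates only the repertoire semantics in plain Python; the helper names `remember` and `recover` are hypothetical and not part of the TorchRL API.

```python
# The repertoire maps hash tuples to original values; unseen hashes
# resolve to None, matching the documented inverse behavior.
repertoire = {}

def remember(hash_bytes, value):
    """Record a value under its hash, keyed as a tuple of ints."""
    repertoire[tuple(hash_bytes)] = value

def recover(hash_bytes):
    """Look up a value by its hash; missing hashes map to None."""
    return repertoire.get(tuple(hash_bytes))

remember(b"\x01\x02", "obs-a")
print(recover(b"\x01\x02"))  # obs-a
print(recover(b"\xff\xff"))  # None
```

Because the transform keeps a reference to the repertoire rather than a copy, entries added to the same dict after instantiation are visible to later inverse calls.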

Keyword Arguments:
  • hash_fn (Callable, optional) – the hash function to use. If seed is given, the hash function must accept it as its second argument. Default is Hash.reproducible_hash.

  • seed (optional) – seed to use for the hash function, if it requires one.

  • use_raw_nontensor (bool, optional) – if False, data is extracted from NonTensorData/NonTensorStack inputs before hash_fn is called on them. If True, the raw NonTensorData/NonTensorStack inputs are given directly to hash_fn, which must support those inputs. Default is False.
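The hash_fn contract can be illustrated with a small stand-in. The function below is a hypothetical custom hash function, not part of TorchRL: it takes the value as its first argument and, as required when a seed is supplied to the transform, accepts the seed as its second argument.

```python
import hashlib

# A hypothetical custom hash_fn (not the TorchRL default): the value is
# the first argument; when Hash is given a seed, it arrives as the second.
def my_hash_fn(value, seed=None):
    h = hashlib.blake2b(digest_size=32)  # 256-bit digest
    if seed is not None:
        h.update(str(seed).encode())
    h.update(str(value).encode())
    return h.digest()

d1 = my_hash_fn("observation bytes", seed=0)
d2 = my_hash_fn("observation bytes", seed=0)
print(len(d1), d1 == d2)  # 32 True
```

Any callable with this shape can be passed as `hash_fn=my_hash_fn, seed=0`; the important property is that the same (value, seed) pair always yields the same digest.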

Examples:

>>> from torchrl.envs import GymEnv, UnaryTransform, Hash
>>> env = GymEnv("Pendulum-v1")
>>> # process the string output
>>> env = env.append_transform(
...     UnaryTransform(
...         in_keys=["observation"],
...         out_keys=["observation_str"],
...         fn=lambda tensor: str(tensor.numpy().tobytes())))
>>> # hash the output
>>> env = env.append_transform(
...     Hash(
...         in_keys=["observation_str"],
...         out_keys=["observation_hash"],
...     )
... )
>>> env.observation_spec
Composite(
    observation: BoundedContinuous(
        shape=torch.Size([3]),
        space=ContinuousBox(
            low=Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, contiguous=True),
            high=Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, contiguous=True)),
        device=cpu,
        dtype=torch.float32,
        domain=continuous),
    observation_str: NonTensor(
        shape=torch.Size([]),
        space=None,
        device=cpu,
        dtype=None,
        domain=None),
    observation_hash: UnboundedDiscrete(
        shape=torch.Size([32]),
        space=ContinuousBox(
            low=Tensor(shape=torch.Size([32]), device=cpu, dtype=torch.uint8, contiguous=True),
            high=Tensor(shape=torch.Size([32]), device=cpu, dtype=torch.uint8, contiguous=True)),
        device=cpu,
        dtype=torch.uint8,
        domain=discrete),
    device=None,
    shape=torch.Size([]))
>>> env.rollout(3)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False),
        done: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                observation: Tensor(shape=torch.Size([3, 3]), device=cpu, dtype=torch.float32, is_shared=False),
                observation_hash: Tensor(shape=torch.Size([3, 32]), device=cpu, dtype=torch.uint8, is_shared=False),
                observation_str: NonTensorStack(
                    ["b'g\x08\x8b\xbexav\xbf\x00\xee(>'", "b'\x...,
                    batch_size=torch.Size([3]),
                    device=None),
                reward: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([3]),
            device=None,
            is_shared=False),
        observation: Tensor(shape=torch.Size([3, 3]), device=cpu, dtype=torch.float32, is_shared=False),
        observation_hash: Tensor(shape=torch.Size([3, 32]), device=cpu, dtype=torch.uint8, is_shared=False),
        observation_str: NonTensorStack(
            ["b'\xb5\x17\x8f\xbe\x88\xccu\xbf\xc0Vr?'"...,
            batch_size=torch.Size([3]),
            device=None),
        terminated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> env.check_env_specs()
[torchrl][INFO] check_env_specs succeeded!

classmethod reproducible_hash(string, seed=None)[source]

Creates a reproducible 256-bit hash from a string using a seed.

Parameters:
  • string (str or None) – The input string. If None, the empty string "" is used.

  • seed (str, optional) – The seed value. Default is None.

Returns:

A tensor of shape (32,) with dtype torch.uint8.

Return type:

Tensor
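For intuition, a seeded 256-bit digest of the kind reproducible_hash returns can be sketched with hashlib. This is an illustration of the shape and determinism of the result only, not TorchRL's actual implementation; the choice of sha256 and the seed-prefixing scheme here are assumptions.

```python
import hashlib

def reproducible_hash_sketch(string, seed=None):
    """Illustrative stand-in for a seeded 256-bit string hash."""
    # None is treated as the empty string, as documented above.
    if string is None:
        string = ""
    h = hashlib.sha256()  # 256-bit digest; the algorithm is an assumption
    if seed is not None:
        h.update(str(seed).encode())
    h.update(string.encode())
    return list(h.digest())  # stands in for a shape-(32,) uint8 tensor

digest = reproducible_hash_sketch("hello", seed="1")
print(len(digest))  # 32
```

The key properties mirrored here are that the same (string, seed) pair always yields the same 32-byte digest, and that each byte fits the uint8 range of the returned tensor.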

state_dict(*args, destination=None, prefix='', keep_vars=False)[source]

Return a dictionary containing references to the whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.

Note

The returned object is a shallow copy. It contains references to the module’s parameters and buffers.

Warning

Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.

Warning

Please avoid the use of argument destination as it is not designed for end-users.

Parameters:
  • destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.

  • prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.

  • keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.

Returns:

a dictionary containing a whole state of the module

Return type:

dict

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']