Hash
- class torchrl.envs.transforms.Hash(in_keys: Sequence[NestedKey], out_keys: Sequence[NestedKey], in_keys_inv: Sequence[NestedKey] = None, out_keys_inv: Sequence[NestedKey] = None, *, hash_fn: Callable = None, seed: Any | None = None, use_raw_nontensor: bool = False, repertoire: tuple[tuple[int], Any] = None)[source]
Adds a hash value to a tensordict.
- Parameters:
in_keys (sequence of NestedKey) – the keys of the values to hash.
out_keys (sequence of NestedKey) – the keys of the resulting hashes.
in_keys_inv (sequence of NestedKey, optional) – the keys of the values to hash during inv call.
out_keys_inv (sequence of NestedKey, optional) – the keys of the resulting hashes during inv call.
- Keyword Arguments:
hash_fn (Callable, optional) – the hash function to use. The function signature must be (input: Any, seed: Any | None) -> torch.Tensor. seed is only used if this transform is initialized with the seed argument. Default is Hash.reproducible_hash.
seed (optional) – seed to use for the hash function, if it requires one.
use_raw_nontensor (bool, optional) – if False, data is extracted from NonTensorData/NonTensorStack inputs before fn is called on them. If True, the raw NonTensorData/NonTensorStack inputs are given directly to fn, which must support those inputs. Default is False.
repertoire (Dict[Tuple[int], Any], optional) – if given, this dict stores the inverse mappings from hashes to inputs. The repertoire isn't copied, so it can be modified in the same workspace after the transform is instantiated, and these modifications will be reflected in the map. Missing hashes will be mapped to None. Default: None.
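A custom hash_fn only needs to honor the (input, seed) call contract described above. Below is a minimal, dependency-free sketch (crc_hash_fn is a hypothetical name; for brevity it returns an int, whereas a real hash_fn should wrap its result in a torch.Tensor, e.g. with torch.tensor(..., dtype=torch.uint8)):

```python
import zlib

# Hypothetical custom hash_fn honoring the documented
# (input: Any, seed: Any | None) signature. The seed, when given,
# is mixed in as the CRC starting value, so the same input hashed
# under different seeds yields different values.
def crc_hash_fn(inp, seed=None):
    start = zlib.crc32(repr(seed).encode("utf-8")) if seed is not None else 0
    return zlib.crc32(str(inp).encode("utf-8"), start)

h1 = crc_hash_fn("observation bytes", seed=0)
h2 = crc_hash_fn("observation bytes", seed=0)  # same input + seed: same hash
h3 = crc_hash_fn("observation bytes", seed=1)  # different seed: different hash
```

The seed argument only reaches hash_fn when the transform was constructed with seed, matching the note above.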
Examples:
    >>> from torchrl.envs import GymEnv, UnaryTransform, Hash
    >>> env = GymEnv("Pendulum-v1")
    >>> # process the string output
    >>> env = env.append_transform(
    ...     UnaryTransform(
    ...         in_keys=["observation"],
    ...         out_keys=["observation_str"],
    ...         fn=lambda tensor: str(tensor.numpy().tobytes())))
    >>> # add a hash to the string output
    >>> env = env.append_transform(
    ...     Hash(
    ...         in_keys=["observation_str"],
    ...         out_keys=["observation_hash"],
    ...     )
    ... )
    >>> env.observation_spec
    Composite(
        observation: BoundedContinuous(
            shape=torch.Size([3]),
            space=ContinuousBox(
                low=Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, contiguous=True),
                high=Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, contiguous=True)),
            device=cpu,
            dtype=torch.float32,
            domain=continuous),
        observation_str: NonTensor(
            shape=torch.Size([]),
            space=None,
            device=cpu,
            dtype=None,
            domain=None),
        observation_hash: UnboundedDiscrete(
            shape=torch.Size([32]),
            space=ContinuousBox(
                low=Tensor(shape=torch.Size([32]), device=cpu, dtype=torch.uint8, contiguous=True),
                high=Tensor(shape=torch.Size([32]), device=cpu, dtype=torch.uint8, contiguous=True)),
            device=cpu,
            dtype=torch.uint8,
            domain=discrete),
        device=None,
        shape=torch.Size([]))
    >>> env.rollout(3)
    TensorDict(
        fields={
            action: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False),
            done: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
            next: TensorDict(
                fields={
                    done: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                    observation: Tensor(shape=torch.Size([3, 3]), device=cpu, dtype=torch.float32, is_shared=False),
                    observation_hash: Tensor(shape=torch.Size([3, 32]), device=cpu, dtype=torch.uint8, is_shared=False),
                    observation_str: NonTensorStack(
                        ["b'g\x08\x8b\xbexav\xbf\x00\xee(>'", "b'\x...,
                        batch_size=torch.Size([3]),
                        device=None),
                    reward: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                    terminated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                    truncated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
                batch_size=torch.Size([3]),
                device=None,
                is_shared=False),
            observation: Tensor(shape=torch.Size([3, 3]), device=cpu, dtype=torch.float32, is_shared=False),
            observation_hash: Tensor(shape=torch.Size([3, 32]), device=cpu, dtype=torch.uint8, is_shared=False),
            observation_str: NonTensorStack(
                ["b'\xb5\x17\x8f\xbe\x88\xccu\xbf\xc0Vr?'"...,
                batch_size=torch.Size([3]),
                device=None),
            terminated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False),
            truncated: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
        batch_size=torch.Size([3]),
        device=None,
        is_shared=False)
    >>> env.check_env_specs()
    [torchrl][INFO] check_env_specs succeeded!
- get_input_from_hash(hash_tensor)[source]
Look up the input that was given for a particular hash output.
This feature is only available if, during initialization, either the repertoire argument was given, or both the in_keys_inv and out_keys_inv arguments were given.
- Parameters:
hash_tensor (Tensor) – The hash output.
- Returns:
The input that the hash was generated from.
- Return type:
Any
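The repertoire-based lookup can be pictured as a plain dict keyed by the hash bytes. The following is a sketch of the semantics documented above, not the transform's actual code, and the record helper is a hypothetical name:

```python
# Sketch of the repertoire semantics: hashes (stored as hashable
# tuples of ints) map back to the inputs they were computed from;
# unknown hashes map to None, mirroring the documented behavior.
repertoire = {}

def record(inp, hash_bytes):
    # store the inverse mapping when a hash is produced
    repertoire[tuple(hash_bytes)] = inp

def get_input_from_hash(hash_bytes):
    # missing hashes are mapped to None
    return repertoire.get(tuple(hash_bytes))

record("hello", [1, 2, 3])
found = get_input_from_hash([1, 2, 3])
missing = get_input_from_hash([9, 9, 9])
```

Because the real repertoire isn't copied at construction time, entries added to it after instantiation are visible to later lookups, exactly as with the shared dict here.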
- classmethod reproducible_hash(string, seed=None)[source]
Creates a reproducible 256-bit hash from a string using a seed.
- Parameters:
string (str or None) – The input string. If None, the null string "" is used.
seed (str, optional) – The seed value. Default is None.
- Returns:
Shape (32,) with dtype torch.uint8.
- Return type:
Tensor
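The determinism and seed-sensitivity promised here can be illustrated with a keyed BLAKE2b digest from the standard library. This is an analogue with the same 32-byte output size, assuming the key plays the seed's role; it is not necessarily torchrl's exact implementation, and hash256 is a hypothetical name:

```python
import hashlib

# 256-bit (32-byte) seeded digest, analogous in size to the
# documented (32,) torch.uint8 output of reproducible_hash.
def hash256(string, seed=None):
    if string is None:
        string = ""  # None falls back to the null string, as documented
    key = seed.encode("utf-8") if seed is not None else b""
    return hashlib.blake2b(string.encode("utf-8"), digest_size=32, key=key).digest()

d1 = hash256("observation", seed="0")
d2 = hash256("observation", seed="0")  # reproducible: identical digest
d3 = hash256("observation", seed="1")  # different seed: different digest
```

In practice the 32 raw bytes would be wrapped in a shape-(32,) torch.uint8 tensor to match the documented return type.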
- state_dict(*args, destination=None, prefix='', keep_vars=False)[source]
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of the argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of the module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in the state dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing the whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']