torch.random¶
- torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices', device_type='cuda')[source]¶
Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in.
- Parameters
devices (iterable of Device IDs) – devices for which to fork the RNG. CPU RNG state is always forked. By default, fork_rng() operates on all devices, but will emit a warning if your machine has a lot of devices, since this function will run very slowly in that case. If you explicitly specify devices, this warning will be suppressed.
enabled (bool) – if False, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent your Python code under it.
device_type (str) – device type string, default is cuda. For custom devices, see the details in [Note: support the custom device with privateuse1].
- Return type
Generator
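Example (a minimal sketch; the variable names are illustrative): re-seeding and drawing random numbers inside the forked block does not disturb the random numbers drawn after it.
>>> import torch
>>> _ = torch.manual_seed(0)
>>> with torch.random.fork_rng():
...     _ = torch.manual_seed(123)   # visible only inside the block
...     inner = torch.rand(3)
>>> outer = torch.rand(3)            # same values as if the block had never re-seeded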
- torch.random.get_rng_state()[source]¶
Returns the random number generator state as a torch.ByteTensor.
Note
The returned state is for the default generator on CPU only.
See also: torch.random.fork_rng().
- Return type
Tensor
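Example (a short sketch): the returned state is a byte tensor that can be stored and later passed back to torch.random.set_rng_state().
>>> import torch
>>> state = torch.random.get_rng_state()   # snapshot of the default CPU generator
>>> state.dtype
torch.uint8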
- torch.random.initial_seed()[source]¶
Returns the initial seed for generating random numbers as a Python long.
Note
The returned seed is for the default generator on CPU only.
- Return type
int
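Example (a minimal sketch): after seeding the default CPU generator, initial_seed() reports that seed back.
>>> import torch
>>> _ = torch.manual_seed(42)
>>> torch.random.initial_seed()
42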
- torch.random.manual_seed(seed)[source]¶
Sets the seed for generating random numbers on all devices. Returns a torch.Generator object.
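Example (a minimal reproducibility sketch; variable names are illustrative): seeding twice with the same value makes the subsequent draws identical.
>>> import torch
>>> _ = torch.manual_seed(7)
>>> a = torch.rand(3)
>>> _ = torch.manual_seed(7)
>>> b = torch.rand(3)
>>> torch.equal(a, b)
True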
- torch.random.seed()[source]¶
Sets the seed for generating random numbers to a non-deterministic random number on all devices. Returns a 64-bit number used to seed the RNG.
- Return type
int
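Example (a sketch): the returned 64-bit value can be logged and later passed to torch.manual_seed() to replay the same random stream.
>>> import torch
>>> s = torch.random.seed()   # non-deterministic seed, applied to all devices
>>> a = torch.rand(3)
>>> _ = torch.manual_seed(s)  # replay the same seed later
>>> b = torch.rand(3)
>>> torch.equal(a, b)
True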
- torch.random.set_rng_state(new_state)[source]¶
Sets the random number generator state.
Note
This function only works for CPU. For CUDA, please use torch.manual_seed(), which works for both CPU and CUDA.
- Parameters
new_state (torch.ByteTensor) – The desired state
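Example (a sketch of a CPU-only save/restore round trip; for CUDA generators, re-seeding with torch.manual_seed() is the route the note above recommends).
>>> import torch
>>> saved = torch.random.get_rng_state()   # torch.ByteTensor snapshot of the CPU RNG
>>> a = torch.rand(4)                      # advances the CPU RNG
>>> torch.random.set_rng_state(saved)      # rewind to the snapshot
>>> b = torch.rand(4)
>>> torch.equal(a, b)
True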