torch.use_deterministic_algorithms

torch.use_deterministic_algorithms(mode, *, warn_only=False)[source]
Sets whether PyTorch operations must use "deterministic" algorithms, i.e. algorithms that, given the same input and run on the same software and hardware, always produce the same output. When enabled, operations use deterministic algorithms when available; if only nondeterministic algorithms are available, they throw a RuntimeError when called.

Note
This setting alone is not always enough to make an application reproducible. Refer to Reproducibility for more information.
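As a hedged sketch of the fuller setup the Reproducibility notes describe, this flag is typically combined with RNG seeding and the cuBLAS workspace variable mentioned later in this page (the helper name `seed_everything` is illustrative, not part of PyTorch):

```python
import os
import random

# For deterministic cuBLAS results on CUDA >= 10.2 this must be set before
# the first CUDA call; it is harmless on CPU-only machines.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch

def seed_everything(seed: int = 0) -> None:
    """Seed the common RNGs and opt in to deterministic kernels."""
    random.seed(seed)
    torch.manual_seed(seed)  # seeds the CPU and all CUDA devices
    torch.use_deterministic_algorithms(True)

seed_everything(42)
a = torch.randn(8)
seed_everything(42)
b = torch.randn(8)
assert torch.equal(a, b)  # same seed, same software/hardware -> same values
```

Seeding alone fixes the random inputs; `use_deterministic_algorithms(True)` additionally pins down which kernel computes each op.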
Note
torch.set_deterministic_debug_mode() offers an alternative interface for this feature.

The following normally-nondeterministic operations will act deterministically when mode=True:

- torch.nn.Conv1d when called on a CUDA tensor
- torch.nn.Conv2d when called on a CUDA tensor
- torch.nn.Conv3d when called on a CUDA tensor
- torch.nn.ConvTranspose1d when called on a CUDA tensor
- torch.nn.ConvTranspose2d when called on a CUDA tensor
- torch.nn.ConvTranspose3d when called on a CUDA tensor
- torch.nn.ReplicationPad2d when attempting to differentiate a CUDA tensor
- torch.bmm() when called on sparse-dense CUDA tensors
- torch.Tensor.__getitem__() when attempting to differentiate a CPU tensor and the index is a list of tensors
- torch.Tensor.index_put() with accumulate=False
- torch.Tensor.index_put() with accumulate=True when called on a CPU tensor
- torch.Tensor.put_() with accumulate=True when called on a CPU tensor
- torch.Tensor.scatter_add_() when called on a CUDA tensor
- torch.gather() when called on a CUDA tensor that requires grad
- torch.index_add() when called on a CUDA tensor
- torch.index_select() when attempting to differentiate a CUDA tensor
- torch.repeat_interleave() when attempting to differentiate a CUDA tensor
- torch.Tensor.index_copy() when called on a CPU or CUDA tensor
- torch.Tensor.scatter() when src type is Tensor and called on a CUDA tensor
- torch.Tensor.scatter_reduce() when reduce='sum' or reduce='mean' and called on a CUDA tensor
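As a quick CPU-side illustration of one entry above (a sketch, not an official snippet): index_put_ with accumulate=True on a CPU tensor is covered by the deterministic list, so repeated runs under mode=True produce bit-identical results even when indices repeat:

```python
import torch

torch.use_deterministic_algorithms(True)

idx = (torch.tensor([0, 0, 1]),)        # index 0 appears twice
vals = torch.tensor([1.0, 2.0, 3.0])

# accumulate=True sums values that target the same location; under
# deterministic mode the accumulation order is fixed, so runs agree.
a = torch.zeros(4).index_put_(idx, vals, accumulate=True)
b = torch.zeros(4).index_put_(idx, vals, accumulate=True)
assert torch.equal(a, b)
# a is tensor([3., 3., 0., 0.]): 1.0 + 2.0 accumulate into index 0.
```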
The following normally-nondeterministic operations will throw a RuntimeError when mode=True:

- torch.nn.AvgPool3d when attempting to differentiate a CUDA tensor
- torch.nn.AdaptiveAvgPool2d when attempting to differentiate a CUDA tensor
- torch.nn.AdaptiveAvgPool3d when attempting to differentiate a CUDA tensor
- torch.nn.MaxPool3d when attempting to differentiate a CUDA tensor
- torch.nn.AdaptiveMaxPool2d when attempting to differentiate a CUDA tensor
- torch.nn.FractionalMaxPool2d when attempting to differentiate a CUDA tensor
- torch.nn.FractionalMaxPool3d when attempting to differentiate a CUDA tensor
- torch.nn.functional.interpolate() when attempting to differentiate a CUDA tensor and one of the following modes is used: linear, bilinear, bicubic, or trilinear
- torch.nn.ReflectionPad1d when attempting to differentiate a CUDA tensor
- torch.nn.ReflectionPad2d when attempting to differentiate a CUDA tensor
- torch.nn.ReflectionPad3d when attempting to differentiate a CUDA tensor
- torch.nn.ReplicationPad1d when attempting to differentiate a CUDA tensor
- torch.nn.ReplicationPad3d when attempting to differentiate a CUDA tensor
- torch.nn.NLLLoss when called on a CUDA tensor
- torch.nn.CTCLoss when attempting to differentiate a CUDA tensor
- torch.nn.EmbeddingBag when attempting to differentiate a CUDA tensor when mode='max'
- torch.Tensor.put_() when accumulate=False
- torch.Tensor.put_() when accumulate=True and called on a CUDA tensor
- torch.histc() when called on a CUDA tensor
- torch.bincount() when called on a CUDA tensor and a weights tensor is given
- torch.kthvalue() when called on a CUDA tensor
- torch.median() with indices output when called on a CUDA tensor
- torch.nn.functional.grid_sample() when attempting to differentiate a CUDA tensor
- torch.cumsum() when called on a CUDA tensor when dtype is floating point or complex
- torch.Tensor.scatter_reduce() when reduce='prod' and called on a CUDA tensor
- torch.Tensor.resize_() when called with a quantized tensor
In addition, several operations fill uninitialized memory when this setting is turned on and when torch.utils.deterministic.fill_uninitialized_memory is turned on. See the documentation for that attribute for more information.

A handful of CUDA operations are nondeterministic if the CUDA version is 10.2 or greater, unless the environment variable CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8 is set. See the CUDA documentation for more details: https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility If neither environment variable configuration is set, a RuntimeError will be raised from those operations when called with CUDA tensors.

Note that deterministic operations tend to have worse performance than nondeterministic operations.
Note
This flag does not detect or prevent nondeterministic behavior caused by calling an in-place operation on a tensor with an internal memory overlap, or by passing such a tensor as the out argument of an operation. In these cases, multiple writes of different data may target a single memory location, and the order of writes is not guaranteed.

Parameters
- mode (bool) – If True, makes potentially nondeterministic operations switch to a deterministic algorithm or throw a runtime error. If False, allows nondeterministic operations.

Keyword Arguments
- warn_only (bool, optional) – If True, operations that do not have a deterministic implementation will throw a warning instead of an error. Default: False
Example:

>>> torch.use_deterministic_algorithms(True)

# Forward mode nondeterministic error
>>> torch.randn(10, device='cuda').kthvalue(1)
...
RuntimeError: kthvalue CUDA does not have a deterministic implementation...

# Backward mode nondeterministic error
>>> torch.nn.AvgPool3d(1)(torch.randn(3, 4, 5, 6, requires_grad=True).cuda()).sum().backward()
...
RuntimeError: avg_pool3d_backward_cuda does not have a deterministic implementation...
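The current state of both settings can also be queried programmatically, which is handy in test fixtures. A brief sketch using the query helpers torch.are_deterministic_algorithms_enabled() and torch.is_deterministic_algorithms_warn_only_enabled(); with warn_only=True, an op lacking a deterministic implementation emits a UserWarning but still runs:

```python
import torch

# Strict mode: nondeterministic-only ops raise RuntimeError.
torch.use_deterministic_algorithms(True)
assert torch.are_deterministic_algorithms_enabled()
assert not torch.is_deterministic_algorithms_warn_only_enabled()

# Lenient mode: the same ops warn (UserWarning) instead of raising.
torch.use_deterministic_algorithms(True, warn_only=True)
assert torch.is_deterministic_algorithms_warn_only_enabled()

# Back to the default: nondeterministic algorithms are allowed.
torch.use_deterministic_algorithms(False)
assert not torch.are_deterministic_algorithms_enabled()
```

warn_only=True is a pragmatic middle ground while migrating a model: you see every nondeterministic call site in the warnings without the run aborting.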