# Config options to enable/disable C++ kernel for nn.functional.MHA
# and nn.TransformerEncoder
import torch

_is_fastpath_enabled: bool = True


def get_fastpath_enabled() -> bool:
    """Returns whether fast path for TransformerEncoder and MultiHeadAttention
    is enabled, or ``True`` if jit is scripting.

    .. note::
        The fastpath might not be run even if ``get_fastpath_enabled`` returns
        ``True`` unless all conditions on inputs are met.
    """
    if not torch.jit.is_scripting():
        return _is_fastpath_enabled
    return True


def set_fastpath_enabled(value: bool) -> None:
    """Sets whether fast path is enabled"""
    global _is_fastpath_enabled
    _is_fastpath_enabled = value
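A minimal standalone sketch of how this getter/setter pair behaves: it replicates the module-level flag pattern above without importing torch (the jit-scripting branch is omitted), so the toggle semantics can be seen in isolation. The function names mirror the source; everything else here is illustrative.

```python
# Standalone sketch of the global-flag pattern, without torch.
# Mirrors the document's getter/setter, minus the jit-scripting branch.
_is_fastpath_enabled: bool = True


def get_fastpath_enabled() -> bool:
    # Returns the current value of the module-level flag.
    return _is_fastpath_enabled


def set_fastpath_enabled(value: bool) -> None:
    # Rebinds the module-level flag; `global` is required because
    # assignment would otherwise create a function-local name.
    global _is_fastpath_enabled
    _is_fastpath_enabled = value


# The fastpath defaults to enabled; a caller might disable it
# temporarily (e.g. to compare against the slow path) and restore it.
assert get_fastpath_enabled() is True
set_fastpath_enabled(False)
assert get_fastpath_enabled() is False
set_fastpath_enabled(True)
```

A common usage pattern is save/restore: read the old value, disable the fastpath inside a `try`, and restore the saved value in `finally`, so the global flag is never left in an unexpected state.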