import torch


def is_available():
    r"""Return whether PyTorch is built with MKL-DNN support."""
    return torch._C._has_mkldnn
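# A minimal usage sketch (illustrative only; not part of this module): check the
# build-time capability above before relying on MKL-DNN-specific behavior.
def _example_check_mkldnn_support():
    if torch.backends.mkldnn.is_available():
        print("PyTorch was built with MKL-DNN (oneDNN) support")
    else:
        print("This PyTorch build does not include MKL-DNN support")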
VERBOSE_OFF = 0
VERBOSE_ON = 1
VERBOSE_ON_CREATION = 2
class verbose:
    """
    On-demand oneDNN (formerly MKL-DNN) verbosing functionality.

    To make it easier to debug performance issues, oneDNN can dump verbose
    messages containing information like kernel size, input data size and
    execution duration while executing the kernel. The verbosing functionality
    can also be invoked via an environment variable named ``DNNL_VERBOSE``.
    However, that approach dumps messages at every step, which produces a very
    large amount of output, and for investigating performance issues the
    verbose messages of a single iteration are generally enough. This on-demand
    verbosing functionality makes it possible to control the scope of verbose
    message dumping. In the following example, verbose messages are dumped out
    for the second inference only.

    .. highlight:: python
    .. code-block:: python

        import torch
        model(data)
        with torch.backends.mkldnn.verbose(torch.backends.mkldnn.VERBOSE_ON):
            model(data)

    Args:
        level: Verbose level
            - ``VERBOSE_OFF``: Disable verbosing
            - ``VERBOSE_ON``: Enable verbosing
            - ``VERBOSE_ON_CREATION``: Enable verbosing, including oneDNN kernel creation
    """

    def __init__(self, level):
        self.level = level

    def __enter__(self):
        if self.level == VERBOSE_OFF:
            return
        st = torch._C._verbose.mkldnn_set_verbose(self.level)
        assert st, "Failed to set MKL-DNN into verbose mode. Please consider disabling this verbose scope."
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        torch._C._verbose.mkldnn_set_verbose(VERBOSE_OFF)
        return False
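# A minimal end-to-end sketch (illustrative only; `model` and `data` are assumed
# placeholders supplied by the caller): collect oneDNN verbose messages,
# including kernel creation events, for a single inference step.
def _example_verbose_single_iteration(model, data):
    # The first inference runs outside the verbose scope and produces no dump,
    # mirroring the docstring example above.
    model(data)
    # Only this second call emits oneDNN verbose messages; VERBOSE_ON_CREATION
    # additionally reports kernel creation.
    with torch.backends.mkldnn.verbose(torch.backends.mkldnn.VERBOSE_ON_CREATION):
        return model(data)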