
torchtnt.utils.timer.Timer

class torchtnt.utils.timer.Timer(*, cuda_sync: Optional[bool] = None, verbose: bool = False, size_bounds: Optional[Tuple[int, int]] = None)
__init__(*, cuda_sync: Optional[bool] = None, verbose: bool = False, size_bounds: Optional[Tuple[int, int]] = None) → None

A Timer class which implements TimerProtocol and stores timings in a dictionary, recorded_durations.

Parameters:
  • cuda_sync – whether to call torch.cuda.synchronize() before and after timing. Defaults to True if CUDA is available.
  • verbose – whether to enable verbose logging.
  • size_bounds – the (lower, upper) bounds on the number of samples kept per action. The lower bound must be smaller than the upper bound. When the number of samples for an action reaches the upper bound, the oldest (upper - lower) samples are removed, leaving the most recent samples.

Note

Enabling cuda_sync will incur a performance hit, but will ensure accurate timings on GPUs.

Raises: ValueError – If cuda_sync is set to True but CUDA is not available.
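
Example (a minimal construction sketch; size_bounds is assumed to take a (lower, upper) tuple, per the parameter description above):

    import torch

    from torchtnt.utils.timer import Timer

    # Default construction: cuda_sync resolves automatically and is
    # enabled only when CUDA is available.
    timer = Timer()

    # Forcing synchronization on a CPU-only machine raises ValueError.
    if not torch.cuda.is_available():
        try:
            Timer(cuda_sync=True)
        except ValueError as err:
            print(err)

    # Assumed (lower, upper) form: keep at most 1000 samples per action;
    # once the upper bound is reached, the oldest 900 samples are dropped
    # so that the most recent 100 remain.
    bounded = Timer(size_bounds=(100, 1000))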

Methods

__init__(*[, cuda_sync, verbose, size_bounds]) – Initialize the Timer, which implements TimerProtocol and stores timings in a dictionary, recorded_durations.
reset() – Reset recorded_durations, clearing all stored timings.
time(action_name) – A context manager for timing a code block, with optional CUDA synchronization and verbose logging.
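
Example (a short usage sketch of time() and reset(); the matmul workload is only illustrative):

    import torch

    from torchtnt.utils.timer import Timer

    timer = Timer()
    x = torch.randn(512, 512)

    # Each `with timer.time(...)` block records one duration under the
    # given action name.
    for _ in range(3):
        with timer.time("matmul"):
            _ = x @ x

    print(timer.recorded_durations["matmul"])  # three recorded durations

    timer.reset()  # clears everything recorded so far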

Attributes

recorded_durations
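
Since recorded_durations maps each action name to its list of measured durations (assumed here to behave like a Dict[str, List[float]] of seconds), simple aggregation is straightforward:

    import time

    from torchtnt.utils.timer import Timer

    timer = Timer()
    with timer.time("sleep"):
        time.sleep(0.01)

    # Assumption: recorded_durations behaves like Dict[str, List[float]].
    for action, durations in timer.recorded_durations.items():
        mean = sum(durations) / len(durations)
        print(f"{action}: {len(durations)} sample(s), mean {mean:.4f}s")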
