
Module Tools

class torcheval.tools.ModuleSummary

Summary of a module and its submodules. It collects the following information:

  • Name

  • Type

  • Number of parameters

  • Number of trainable parameters

  • Estimated size in bytes

  • Whether this module contains uninitialized parameters

  • FLOPs for forward (“?” meaning not calculated)

  • FLOPs for backward (“?” meaning not calculated)

  • Input shape (“?” meaning not calculated)

  • Output shape (“?” meaning not calculated)

  • Forward elapsed time in ms (“?” meaning not calculated)
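For example, a minimal sketch of building a summary with get_module_summary (documented below) and reading a few of these fields:

    import torch
    from torcheval.tools import get_module_summary

    # A small model to summarize.
    model = torch.nn.Sequential(
        torch.nn.Linear(10, 20),
        torch.nn.ReLU(),
        torch.nn.Linear(20, 5),
    )

    summary = get_module_summary(model)
    print(summary.module_type)      # Sequential
    print(summary.num_parameters)   # 325 = (10*20 + 20) + (20*5 + 5)
    print(summary.flops_forward)    # "?" since no example inputs were given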

property flops_backward: int | Literal['?']

Returns the total FLOPs for backward calculation using this module.

property flops_forward: int | Literal['?']

Returns the total FLOPs for forward calculation using this module.

property forward_elapsed_time_ms: Literal['?'] | float

Returns the forward time of the module in ms.

property has_uninitialized_param: bool

Returns whether any parameter in this module is uninitialized.

property in_size: Literal['?'] | List[int]

Returns the input size of the module.

property module_name: str

Returns the name of this module.

property module_type: str

Returns the type of this module.

property num_parameters: int

Returns the total number of parameters in this module.

property num_trainable_parameters: int

Returns the total number of trainable parameters (requires_grad=True) in this module.

property out_size: Literal['?'] | List[int]

Returns the output size of the module.

property size_bytes: int

Returns the total estimated size in bytes of this module.

property submodule_summaries: Dict[str, ModuleSummary]

A Dict with the names of submodules as keys and corresponding ModuleSummary objects as values. These can be traversed for visualization.
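For example, a sketch of recursively walking the summary tree (print_tree is a hypothetical helper, not part of torcheval):

    import torch
    from torcheval.tools import ModuleSummary, get_module_summary

    def print_tree(summary: ModuleSummary, depth: int = 0) -> None:
        # Hypothetical helper: print each module's name and type,
        # indented by its depth in the summary tree.
        print("  " * depth + f"{summary.module_name}: {summary.module_type}")
        for child in summary.submodule_summaries.values():
            print_tree(child, depth + 1)

    model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
    print_tree(get_module_summary(model))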

torcheval.tools.get_module_summary(module: Module, module_args: Tuple[Any, ...] | None = None, module_kwargs: MutableMapping[str, Any] | None = None) → ModuleSummary

Generate a ModuleSummary object, populate its values, and build the submodule tree.

Parameters:
  • module – The module to be summarized.

  • module_args – A tuple of positional arguments used to run the module so that FLOPs and activation sizes can be calculated.

  • module_kwargs

    Any keyword arguments to be passed into the module’s forward function.

    Note

    To calculate FLOPs, you must use PyTorch 1.13 or greater.

    Note

    If the module contains any lazy submodules, FLOPs will NOT be calculated.

    Note

    Currently, only modules that output a single tensor are supported; support for more flexible outputs is planned.
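A sketch of both call styles; the exact FLOP counts depend on the PyTorch version’s operator coverage:

    import torch
    from torcheval.tools import get_module_summary

    model = torch.nn.Linear(16, 4)

    # Without example inputs, only static information is collected;
    # FLOPs, shapes, and timings remain "?".
    static_summary = get_module_summary(model)
    print(static_summary.flops_forward)  # "?"

    # With example inputs, the module is run once so FLOPs, input/output
    # shapes, and forward time can be measured (PyTorch 1.13+).
    traced_summary = get_module_summary(model, module_args=(torch.randn(2, 16),))
    print(traced_summary.in_size, traced_summary.out_size)  # [2, 16] [2, 4]
    print(traced_summary.flops_forward)  # e.g. 128 for this linear layer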

torcheval.tools.get_summary_table(module_summary: ModuleSummary, human_readable_nums: bool = True) → str

Generates a summary table as a string, tabularizing the information in module_summary.

Parameters:
  • module_summary – The ModuleSummary object to be printed/tabularized.

  • human_readable_nums – Set to False to display exact numbers (e.g. 1234 instead of 1.2 K).
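For example (the exact table layout depends on the torcheval version):

    import torch
    from torcheval.tools import get_module_summary, get_summary_table

    model = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU())
    summary = get_module_summary(model, module_args=(torch.randn(1, 10),))

    # Human-readable numbers (default), e.g. parameter counts like "1.2 K".
    print(get_summary_table(summary))

    # Exact numbers, e.g. "1234".
    print(get_summary_table(summary, human_readable_nums=False))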

torcheval.tools.prune_module_summary(module_summary: ModuleSummary, *, max_depth: int) → None

Prune the module summaries that are deeper than max_depth in the module summary tree. The ModuleSummary object is pruned in place.

Parameters:
  • module_summary – Root module summary to prune.

  • max_depth – The maximum depth of module summaries to keep.

Raises:

ValueError – If max_depth is less than 1.
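A minimal sketch of pruning a nested summary (here we assume the root counts as depth 1, consistent with the ValueError condition above):

    import torch
    from torcheval.tools import get_module_summary, prune_module_summary

    model = torch.nn.Sequential(
        torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()),
        torch.nn.Linear(4, 2),
    )
    summary = get_module_summary(model)

    # Keep the root and its direct children; deeper summaries are removed.
    prune_module_summary(summary, max_depth=2)
    print(summary.submodule_summaries.keys())  # top-level children remain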
