Module Tools


A summary of a module and its submodules. It collects the following information:

  • Name

  • Type

  • Number of parameters

  • Number of trainable parameters

  • Estimated size in bytes

  • Whether this module contains uninitialized parameters

  • FLOPs for forward (“?” meaning not calculated)

  • FLOPs for backward (“?” meaning not calculated)

  • Input shape (“?” meaning not calculated)

  • Output shape (“?” meaning not calculated)

  • Forward elapsed time in ms (“?” meaning not calculated)
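The information above forms a small tree of per-module records. As a minimal, library-independent sketch (the dataclass and helper below are illustrative stand-ins, not the library's actual internals), with "?" marking values that were not calculated:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class SummarySketch:
    """Illustrative stand-in for a ModuleSummary record; field names mirror
    the information listed above."""
    module_name: str
    module_type: str
    num_parameters: int = 0
    num_trainable_parameters: int = 0
    size_bytes: int = 0
    has_uninitialized_param: bool = False
    flops_forward: Union[int, str] = "?"
    flops_backward: Union[int, str] = "?"
    in_size: Union[List[int], str] = "?"
    out_size: Union[List[int], str] = "?"
    forward_elapsed_time_ms: Union[float, str] = "?"
    submodule_summaries: Dict[str, "SummarySketch"] = field(default_factory=dict)

def walk(summary: SummarySketch, depth: int = 0) -> List[str]:
    """Traverse the summary tree, yielding one indented line per module."""
    lines = [f"{'  ' * depth}{summary.module_name} ({summary.module_type})"]
    for child in summary.submodule_summaries.values():
        lines.extend(walk(child, depth + 1))
    return lines

root = SummarySketch("net", "Sequential", num_parameters=10)
root.submodule_summaries["fc"] = SummarySketch("fc", "Linear", num_parameters=10)
print("\n".join(walk(root)))
```

The recursive `walk` mirrors how the `submodule_summaries` mapping (described below) can be traversed for visualization.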

property flops_backward: int | Literal['?']

Returns the total FLOPs for backward calculation using this module.

property flops_forward: int | Literal['?']

Returns the total FLOPs for forward calculation using this module.

property forward_elapsed_time_ms: Literal['?'] | float

Returns the forward time of the module in ms.

property has_uninitialized_param: bool

Returns whether this module contains any uninitialized parameters.

property in_size: Literal['?'] | List[int]

Returns the input size of the module.

property module_name: str

Returns the name of this module.

property module_type: str

Returns the type of this module.

property num_parameters: int

Returns the total number of parameters in this module.

property num_trainable_parameters: int

Returns the total number of trainable parameters (requires_grad=True) in this module.

property out_size: Literal['?'] | List[int]

Returns the output size of the module.

property size_bytes: int

Returns the total estimated size in bytes of this module.

property submodule_summaries: Dict[str, ModuleSummary]

A Dict with the names of submodules as keys and corresponding ModuleSummary objects as values. These can be traversed for visualization.

get_module_summary(module: torch.nn.Module, module_args: Tuple[Any, ...] | None = None, module_kwargs: MutableMapping[str, Any] | None = None) → ModuleSummary

Generates a ModuleSummary object, populates its values, and builds the submodule tree.

  • module – The module to be summarized.

  • module_args – A tuple of arguments used to run the module and calculate FLOPs and activation sizes.

  • module_kwargs

    Any kwarg arguments to be passed into the module’s forward function.


    To calculate FLOPs, you must use PyTorch 1.13 or greater.


If the module contains any lazy submodules, FLOPs will NOT be calculated.


Currently only modules that output a single tensor are supported. TODO: support more flexible module outputs.

get_summary_table(module_summary: ModuleSummary, human_readable_nums: bool = True) → str

Generates a string summary_table, tabularizing the information in module_summary.

  • module_summary – The ModuleSummary to be printed/tabularized.

  • human_readable_nums – Set to False for exact numbers (e.g. 1234 vs. 1.2 K).

prune_module_summary(module_summary: ModuleSummary, *, max_depth: int) → None

Prune the module summaries that are deeper than max_depth in the module summary tree. The ModuleSummary object is pruned in place.

  • module_summary – Root module summary to prune.

  • max_depth – The maximum depth of module summaries to keep.


Raises: ValueError – If max_depth is less than 1.
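The pruning behavior can be sketched as a simple recursive walk over the nested `submodule_summaries` mapping. This stand-alone version operates on a plain nested dict and is a hypothetical illustration, not the library's actual implementation:

```python
from typing import Dict

# Each node maps a submodule name to its own (possibly nested) children dict;
# this stands in for ModuleSummary.submodule_summaries. The dict passed in
# holds the root's direct children, and the root itself counts as depth 1.
Tree = Dict[str, "Tree"]

def prune_sketch(tree: Tree, *, max_depth: int) -> None:
    """Drop entries deeper than max_depth, mutating the tree in place."""
    if max_depth < 1:
        raise ValueError("max_depth must be an int greater than or equal to 1")
    if max_depth == 1:
        tree.clear()  # only the root survives; its children are too deep
        return
    for child in tree.values():
        prune_sketch(child, max_depth=max_depth - 1)

summaries: Tree = {"fc1": {"weight_norm": {}}, "fc2": {}}
prune_sketch(summaries, max_depth=2)
print(summaries)  # → {'fc1': {}, 'fc2': {}}
```

With max_depth=2 the root and its direct children are kept while everything deeper is removed, matching the "deeper than max_depth" rule described above.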

