create_optim_in_bwd_wrapper

torchtune.training.create_optim_in_bwd_wrapper(model: Module, optim_dict: Dict[Parameter, Optimizer]) → OptimizerInBackwardWrapper

Create a wrapper for optimizers whose steps run during the backward pass.

Parameters:
  • model (torch.nn.Module) – Model containing the parameters being optimized. For now, all optimized parameters are assumed to belong to a single top-level model; the model's named_parameters() is used to look up parameter names for the parameters being optimized.

  • optim_dict (Dict[torch.nn.Parameter, torch.optim.Optimizer]) – Mapping from parameters to optimizers.

Returns:

Wrapper managing the state of optimizers that run in the backward pass.

Return type:

OptimizerInBackwardWrapper
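
A minimal usage sketch is shown below, assuming the common optimizer-in-backward pattern: one optimizer per parameter, stepped from a post-accumulate-grad hook, with the wrapper used to checkpoint the per-parameter optimizer states via its state_dict(). The toy model, the SGD hyperparameters, and the hook name are illustrative assumptions, not part of this API.

    import torch
    from torchtune.training import create_optim_in_bwd_wrapper

    # Toy model for illustration only.
    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 2))

    # One optimizer per parameter, keyed by the parameter itself.
    optim_dict = {p: torch.optim.SGD([p], lr=0.01) for p in model.parameters()}

    # Step each optimizer as soon as its parameter's gradient is accumulated,
    # then clear the gradient; this fuses the optimizer step into backward.
    def step_in_backward(param: torch.nn.Parameter) -> None:
        optim_dict[param].step()
        optim_dict[param].zero_grad()

    for p in model.parameters():
        p.register_post_accumulate_grad_hook(step_in_backward)

    # Wrap the per-parameter optimizers so their states can be checkpointed
    # under the names from model.named_parameters().
    wrapper = create_optim_in_bwd_wrapper(model, optim_dict)

    loss = model(torch.randn(4, 8)).sum()
    loss.backward()  # optimizer steps run here, inside backward

    state = wrapper.state_dict()  # per-parameter optimizer states, keyed by name

Because each optimizer owns exactly one parameter and steps inside backward, no full set of gradients is ever held in memory at once; the wrapper exists so that checkpointing code can still save and restore optimizer state by parameter name.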
