create_optim_in_bwd_wrapper¶
- torchtune.training.create_optim_in_bwd_wrapper(model: Module, optim_dict: Dict[Parameter, Optimizer]) → OptimizerInBackwardWrapper [source]¶
Create a wrapper for an optimizer step that runs during the backward pass.
- Parameters:
model (torch.nn.Module) – Model that contains parameters that are being optimized. For now, it is assumed that all parameters being optimized belong to a single top-level model.
The named_parameters attribute of model will be accessed to look up parameter names for the parameters being optimized.
optim_dict (Dict[torch.nn.Parameter, torch.optim.Optimizer]) – Mapping from parameters to optimizers.
- Returns:
Wrapper for optimizer states running in backward.
- Return type:
OptimizerInBackwardWrapper
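A minimal usage sketch, assuming torchtune is installed: the wrapper is typically created alongside torchtune.training.register_optim_in_bwd_hooks, which registers the hooks that actually run each optimizer step in backward, while the wrapper provides a consolidated state dict over the per-parameter optimizers. The toy Linear model and SGD hyperparameters below are placeholders, not from the source.

```python
import torch
from torchtune import training

# Placeholder model; in practice this would be a torchtune model.
model = torch.nn.Linear(16, 4)

# One optimizer per parameter, so each parameter can be stepped as soon
# as its gradient is accumulated during backward.
optim_dict = {
    p: torch.optim.SGD([p], lr=0.01) for p in model.parameters()
}

# Register hooks that run optimizer.step() / zero_grad() in backward.
training.register_optim_in_bwd_hooks(model=model, optim_dict=optim_dict)

# Wrap the per-parameter optimizers; the wrapper keys optimizer states
# by parameter name via model.named_parameters().
optim_wrapper = training.create_optim_in_bwd_wrapper(
    model=model, optim_dict=optim_dict
)

# Gradients are applied and zeroed during backward itself, so no
# explicit optimizer.step() call is needed after this.
loss = model(torch.randn(2, 16)).sum()
loss.backward()

# Checkpoint using the wrapper's consolidated state dict.
ckpt = optim_wrapper.state_dict()
```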