get_lr¶
- torchtune.training.get_lr(optimizer: Union[Optimizer, OptimizerInBackwardWrapper]) float [source]¶
Full_finetune_distributed and full_finetune_single_device assume that all parameter groups of the optimizer share the same learning rate. This function validates that assumption and, if it holds, returns that learning rate; see the usage sketch after the field list.
- Parameters:
optimizer (Union[torch.optim.Optimizer, OptimizerInBackwardWrapper]) – The optimizer input, which may be either a regular optimizer or an optimizer wrapper used when optimizer_in_backward is enabled.
- Returns:
The learning rate shared by all parameter groups of the input optimizer.
- Return type:
lr (float)
- Raises:
RuntimeError – If the learning rates of the input optimizer are not the same.
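A minimal usage sketch, assuming a standard torch.optim optimizer whose parameter groups all share one learning rate; the toy model and hyperparameters below are illustrative, not part of the API:

```python
import torch
from torchtune.training import get_lr

# Illustrative setup (assumed for this sketch): a toy model with a single
# optimizer whose parameter groups all use the same learning rate.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# get_lr checks that every param group has the same learning rate and returns it.
lr = get_lr(optimizer)
print(lr)  # 2e-05

# If the param groups had different learning rates, get_lr would raise a
# RuntimeError instead of returning a value.
```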