get_lr

torchtune.training.get_lr(optimizer: Union[Optimizer, OptimizerInBackwardWrapper]) → float[source]

The full_finetune_distributed and full_finetune_single_device recipes assume that all optimizers (and all of their parameter groups) use the same learning rate. This utility validates that assumption and, if it holds, returns that learning rate.

Parameters:

optimizer (Union[torch.optim.Optimizer, OptimizerInBackwardWrapper]) – Either a standard optimizer or an optimizer wrapper created when optimizer_in_backward is enabled.

Returns:

The learning rate shared by the input optimizer(s).

Return type:

lr (float)

Raises:

RuntimeError – If the learning rates of the input optimizer are not all the same.
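
Example:

The following is a minimal usage sketch, not part of the upstream reference; it assumes a toy torch.nn.Linear model and a standard torch.optim.AdamW optimizer, and omits the OptimizerInBackwardWrapper path.

>>> import torch
>>> from torchtune.training import get_lr
>>> model = torch.nn.Linear(8, 8)  # toy model for illustration only
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # single LR across all param groups
>>> get_lr(optimizer)
0.0003
>>> # Per the Raises note above, param groups with differing learning rates
>>> # would cause get_lr to raise a RuntimeError instead of returning a value.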
