FullModelMetaCheckpointer
- class torchtune.training.FullModelMetaCheckpointer(checkpoint_dir: str, checkpoint_files: List[str], model_type: str, output_dir: str, adapter_checkpoint: Optional[str] = None, recipe_checkpoint: Optional[str] = None, resume_from_checkpoint: bool = False)
Checkpointer which reads and writes checkpoints in Meta’s format. Examples include the Llama-2-7b model from the meta-llama repo (https://huggingface.co/meta-llama/Llama-2-7b)
Currently we support reading from a single checkpoint file only. Support for reading from sharded checkpoints is WIP.
- Parameters:
checkpoint_dir (str) – Directory containing the checkpoint files
checkpoint_files (List[str]) – List of checkpoint files to load. Currently this checkpointer only supports loading a single checkpoint file.
model_type (str) – Model type of the model for which the checkpointer is being loaded, e.g. LLAMA3.
output_dir (str) – Directory to save the checkpoint files
adapter_checkpoint (Optional[str]) – Path to the adapter weights. If None, and resume_from_checkpoint=True, then look for adapter_model.pt in output_dir/epoch_{largest_epoch}. Default is None.
recipe_checkpoint (Optional[str]) – Path to the recipe state checkpoint file. If None, and resume_from_checkpoint=True, then look for recipe_state.pt in output_dir/recipe_state. Default is None.
resume_from_checkpoint (bool) – If True, the checkpointer will load the additional checkpoint files to resume training from a previous run. Default is False
- Raises:
ValueError – If checkpoint_files is not a list of length 1
ValueError – If resume_from_checkpoint is True but recipe_checkpoint is None
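A minimal construction sketch is shown below. The directory paths, the consolidated.00.pth file name, and the LLAMA2 model type are illustrative assumptions for a local Llama-2-7b download in Meta format, not values prescribed by this class.

```python
from torchtune.training import FullModelMetaCheckpointer

# Sketch only: paths and file names are placeholders for a local
# Llama-2-7b download in Meta format; adjust them to your own setup.
checkpointer = FullModelMetaCheckpointer(
    checkpoint_dir="/tmp/Llama-2-7b",
    checkpoint_files=["consolidated.00.pth"],  # must be a list of length 1
    model_type="LLAMA2",
    output_dir="/tmp/llama2-finetune",
)
```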
- load_checkpoint() → Dict[str, Any]
Load Meta checkpoint from file. Currently only loading from a single file is supported.
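As a sketch, a recipe might consume the result as below; the "model" key used to pull out the weights is an assumption based on the convention used elsewhere in torchtune recipes, not something guaranteed by this method's signature alone.

```python
# Read the single Meta-format checkpoint file and convert it into a
# torchtune-style state dict. The "model" key is an assumed convention.
ckpt_dict = checkpointer.load_checkpoint()
model_state_dict = ckpt_dict["model"]
```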
- save_checkpoint(state_dict: Dict[str, Any], epoch: int, intermediate_checkpoint: bool = False, adapter_only: bool = False) → None
Save Meta checkpoint to file. If intermediate_checkpoint is True, an additional checkpoint file recipe_state.pt is created in _output_dir/RECIPE_STATE_DIRNAME which contains the recipe state.
- Parameters:
state_dict (Dict[str, Any]) – Checkpoint state dict to be written out to file
epoch (int) – Epoch number. Used to create the checkpoint file name
intermediate_checkpoint (bool) – If True, additional checkpoint files for recipe state and (if applicable) adapter weights are created. Default is False
adapter_only (bool) – If True, only save the adapter weights. Default is False
- Raises:
ValueError – If adapter_only is True and the adapter checkpoint is not found in state_dict.
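A minimal saving sketch, assuming `model` is the fine-tuned torch.nn.Module and reusing the same assumed "model" key convention as above:

```python
# Sketch: write the fine-tuned weights back out in Meta format at the end
# of epoch 0. `model` is assumed to be the trained torch.nn.Module.
checkpointer.save_checkpoint(
    state_dict={"model": model.state_dict()},
    epoch=0,
    intermediate_checkpoint=False,  # True mid-training also writes recipe_state.pt
)
```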