lora_gemma_7b
- torchtune.models.gemma.lora_gemma_7b(lora_attn_modules: List[Literal['q_proj', 'k_proj', 'v_proj', 'output_proj']], apply_lora_to_mlp: bool = False, lora_rank: int = 8, lora_alpha: float = 16, quantize_base: bool = False) → GemmaTransformerDecoder
Builder for creating a Gemma 7B model with LoRA enabled.
The Gemma defaults are the same as in gemma_7b(), while the LoRA default params are based on https://github.com/tloen/alpaca-lora/blob/8bb8579e403dc78e37fe81ffbb253c413007323f/finetune.py#L41-L43.
- Parameters:
lora_attn_modules (List[LORA_ATTN_MODULES]) – list of which linear layers LoRA should be applied to in each self-attention block. Options are {"q_proj", "k_proj", "v_proj", "output_proj"}.
apply_lora_to_mlp (bool) – whether to apply LoRA to the MLP in each transformer layer. Default: False
lora_rank (int) – rank of each low-rank approximation. Default: 8
lora_alpha (float) – scaling factor for the low-rank approximation. Default: 16
quantize_base (bool) – whether to quantize the base model weights. Default: False
- Returns:
Instantiation of Gemma 7B model with LoRA applied
- Return type:
GemmaTransformerDecoder
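A minimal usage sketch, assuming torchtune is installed; the call below matches the signature documented above, while the choice of adapted modules and any downstream training loop are illustrative:

```python
# Minimal usage sketch for lora_gemma_7b; arguments follow the documented
# signature, and the specific module choices here are just an example.
from torchtune.models.gemma import lora_gemma_7b

# Build a Gemma 7B decoder with LoRA adapters on the query and value
# projections of every self-attention block.
model = lora_gemma_7b(
    lora_attn_modules=["q_proj", "v_proj"],
    apply_lora_to_mlp=False,  # leave the MLP weights without adapters
    lora_rank=8,              # rank of each low-rank approximation
    lora_alpha=16,            # in the standard LoRA formulation, the
                              # update is scaled by lora_alpha / lora_rank
)

# Plain-PyTorch sanity check: total parameter count of the returned module.
total_params = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total_params:,}")
```

Passing quantize_base=True gives a QLoRA-style setup, where the frozen base weights are quantized while the LoRA adapter weights remain in higher precision.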