LoRALinear¶
- class torchtune.modules.peft.LoRALinear(in_dim: int, out_dim: int, rank: int, alpha: float, dropout: float = 0.0, use_bias: bool = False, quantize_base: bool = False)[source]¶
LoRA linear layer as introduced in LoRA: Low-Rank Adaptation of Large Language Models.
LoRA perturbs a given layer via a low-rank approximation in which only the rank decomposition matrices are trainable. Whereas a standard linear layer computes \(x \mapsto W_0x\), a LoRALinear layer computes \(x \mapsto W_0x + (\alpha / r)BAx\), where \(r\) is the rank of the matrices \(A\) and \(B\) and \(\alpha\) is a scaling factor. As in the original implementation, we support dropout before multiplication by the low-rank matrices.
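To make the scaled update concrete, here is a minimal numerical sketch computed with plain tensors (the names W0, A, and B below are illustrative, not part of the torchtune API; as in the original LoRA paper, B starts at zero so the perturbation is initially a no-op):

```python
import torch

in_dim, out_dim, rank, alpha = 8, 16, 4, 8.0
x = torch.randn(2, in_dim)

W0 = torch.randn(out_dim, in_dim)  # frozen pretrained weight
A = torch.randn(rank, in_dim)      # trainable low-rank factor: in_dim -> rank
B = torch.zeros(out_dim, rank)     # trainable low-rank factor: rank -> out_dim

# x -> W0 x + (alpha / r) B A x
out = x @ W0.T + (alpha / rank) * (x @ A.T @ B.T)
print(out.shape)  # torch.Size([2, 16])
```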
- Parameters:
in_dim (int) – input dimension
out_dim (int) – output dimension
rank (int) – rank of the low-rank approximation
alpha (float) – scaling factor for the low-rank approximation
dropout (float) – dropout probability. Default: 0.0
use_bias (bool) – whether to include bias in the original linear layer. Default: False
quantize_base (bool) – whether to quantize the base linear weight. Default: False
- adapter_params() → List[str][source]¶
Return lora_a.weight and lora_b.weight as adapter params. If bias is enabled, also return lora_a.bias and lora_b.bias.
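The returned names can be used to mark only the adapter matrices as trainable, for example (a sketch; torchtune recipes provide their own helpers for this, and the dimensions here are arbitrary):

```python
from torchtune.modules.peft import LoRALinear

lora = LoRALinear(in_dim=512, out_dim=512, rank=8, alpha=16.0)
adapter_names = lora.adapter_params()
# e.g. ["lora_a.weight", "lora_b.weight"] when bias is disabled

# Freeze everything except the LoRA matrices.
for name, param in lora.named_parameters():
    param.requires_grad = name in adapter_names
```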
- forward(x: Tensor) → Tensor[source]¶
- Parameters:
x (torch.Tensor) – input tensor with shape (..., in_dim)
- Returns:
output tensor with shape (..., out_dim)
- Return type:
torch.Tensor
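A quick shape check for forward(), using arbitrary illustrative dimensions:

```python
import torch
from torchtune.modules.peft import LoRALinear

layer = LoRALinear(in_dim=64, out_dim=128, rank=4, alpha=8.0, dropout=0.05)
x = torch.randn(2, 10, 64)  # (batch, seq_len, in_dim)
y = layer(x)
print(y.shape)              # torch.Size([2, 10, 128])
```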