AI Glossary
Low-Rank Adaptation
LoRA
Definition
LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning method for LLMs. It freezes the pre-trained weights and injects trainable low-rank matrices that parameterize the weight update: for a weight matrix W of shape d×k, the update is decomposed as ΔW = BA, where B is d×r, A is r×k, and the rank r is much smaller than d and k. This approximates full fine-tuning while training only r(d + k) parameters per layer instead of d·k, enabling efficient task adaptation.
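The decomposition above can be sketched in a few lines of NumPy. This is a minimal illustration, not a training loop: the dimensions d, k and rank r are arbitrary example values, and the initialization (B at zero, A small random) follows the common LoRA convention so that the update starts at zero.

```python
import numpy as np

d, k, r = 768, 768, 8  # illustrative layer dims and low rank

# Frozen pre-trained weight (never updated during fine-tuning)
W = np.random.randn(d, k) * 0.02

# Trainable low-rank factors: delta_W = B @ A has shape (d, k)
B = np.zeros((d, r))              # zero init so delta_W starts at 0
A = np.random.randn(r, k) * 0.02  # small random init

def forward(x):
    # Adapted forward pass x @ (W + B @ A).T, computed without
    # ever materializing the full d-by-k delta matrix
    return x @ W.T + (x @ A.T) @ B.T

full_params = d * k          # parameters updated by full fine-tuning
lora_params = r * (d + k)    # trainable LoRA parameters
print(full_params, lora_params)  # 589824 vs 12288, ~48x fewer
```

In practice only A and B receive gradients; at inference time B @ A can be merged into W, so the adapted model adds no extra latency.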