LoRA: Low-Rank Adaptation of Large Language Models. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. 2021. arXiv preprint arXiv:2106.09685. DOI: 10.48550/arXiv.2106.09685 - Introduces Low-Rank Adaptation (LoRA), a technique for fine-tuning large models by freezing the pretrained weights and injecting trainable low-rank decomposition matrices into the transformer layers, significantly reducing the number of trainable parameters.
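
To make the idea concrete, the sketch below shows a minimal, hypothetical LoRA-style wrapper in PyTorch (not the authors' reference implementation): the pretrained linear weight is frozen and a trainable low-rank product B·A, scaled by alpha/r, is added to its output. The class name `LoRALinear` and the default values for `r` and `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Minimal sketch of the LoRA idea: frozen W plus a trainable low-rank update (alpha/r) * B @ A.

    Names and defaults here are illustrative, not taken from the paper's code.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A starts with small random values and B with zeros, so the
        # adapted layer initially matches the pretrained layer exactly.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
    out = layer(torch.randn(2, 10, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable params: {trainable}/{total}")
```

With rank r much smaller than the layer dimensions, only the two small matrices A and B receive gradients, which is the source of the parameter savings the annotation describes.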