LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021, arXiv preprint arXiv:2106.09685, DOI: 10.48550/arXiv.2106.09685 - The foundational paper introducing LoRA, detailing its mathematical formulation (sketched below), empirical effectiveness, and benefits for parameter-efficient fine-tuning of large language models.
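
For orientation, the paper's core reparameterization freezes the pretrained weight matrix W_0 and learns only a low-rank update, scaled by alpha/r:

$$
h = W_0 x + \Delta W x = W_0 x + \frac{\alpha}{r} B A x, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
$$

Only A and B are trained, so the number of trainable parameters drops from d times k to r(d + k) per adapted matrix.
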
PEFT: Parameter-Efficient Fine-tuning of Foundation Models, Hugging Face, 2023 - Official documentation for the Hugging Face PEFT library, offering practical guidance and code examples for implementing LoRA and other parameter-efficient fine-tuning techniques.
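
As a pointer to what the library covers, here is a minimal sketch of wrapping a Transformers model with a LoRA adapter via PEFT. The base model choice and hyperparameter values (rank, alpha, target modules) are illustrative assumptions, not settings prescribed by the documentation:

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT.
# The model name and hyperparameters below are illustrative, not prescriptive.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical choice of base model; any causal LM from the Hub works similarly.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices (assumed value)
    lora_alpha=16,                        # scaling factor; the update is scaled by alpha/r
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model with trainable low-rank adapters.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of weights being trained
```

The wrapped model can then be passed to a standard training loop or the Transformers Trainer; only the adapter weights receive gradients.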