LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. arXiv preprint arXiv:2106.09685. DOI: 10.48550/arXiv.2106.09685 - This foundational paper introduces LoRA, detailing its theory, mathematical formulation, and experimental validation.
Parameter-Efficient Fine-Tuning (PEFT), Hugging Face, 2024 - Provides practical instructions for implementing LoRA and other PEFT methods with the Hugging Face `peft` library, including details on hyperparameters.
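The core idea shared by both references can be sketched in a few lines: a frozen weight matrix W0 is augmented with a low-rank product B·A, scaled by alpha/r, so only A and B are trained. Below is a minimal, dependency-free illustration of that forward pass; the matrix names follow the paper, while the specific sizes and values are made up for the example. This is a conceptual sketch, not the `peft` library's actual implementation.

```python
def matmul(M, N):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[k] * x[k] for k in range(len(x))) for row in M]

def lora_forward(W0, A, B, x, alpha, r):
    """h = W0 x + (alpha / r) * B A x, with W0 frozen and A, B trainable."""
    base = matvec(W0, x)
    delta = matvec(matmul(B, A), x)  # low-rank update, rank r
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy sizes: d = k = 2, rank r = 1. B is zero-initialized, as in the
# paper, so at the start of training the adapted model reproduces the
# frozen model exactly.
W0 = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]        # r x k
B = [[0.0], [0.0]]      # d x r (zero-initialized)
x = [2.0, 3.0]
print(lora_forward(W0, A, B, x, alpha=2.0, r=1))  # → [2.0, 3.0]
```

With B still zero the output equals W0·x, which is why LoRA can be bolted onto a pretrained model without disturbing it; training then moves only the r·(d+k) entries of A and B instead of all d·k entries of W0.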