LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2106.09685 - Introduces the original LoRA method, detailing its architecture and the default parameter initialization strategy that ensures no initial change to the pre-trained model.
LoRA (PEFT Library) Documentation, Hugging Face, 2024 - Provides practical implementation details and default initialization choices for LoRA layers within the widely used Hugging Face PEFT library.
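The initialization property described in both references can be illustrated with a minimal, self-contained PyTorch sketch (the class name `LoRALinear` and the hyperparameter values here are illustrative, not taken from either source). The paper initializes the down-projection A randomly (Gaussian in the paper; PEFT's default uses a Kaiming-uniform variant) and the up-projection B with zeros, so the low-rank update BA is zero at the start of training and the adapted layer initially reproduces the frozen pre-trained layer exactly:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        # A is randomly initialized; B starts at zero, so B @ A == 0 initially.
        self.A = nn.Parameter(torch.empty(r, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.A, a=5 ** 0.5)
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)


layer = LoRALinear(nn.Linear(64, 64))
x = torch.randn(2, 64)
# Because B is zero-initialized, the adapted layer matches the base layer at init.
assert torch.allclose(layer(x), layer.base(x))
```

In the Hugging Face PEFT library, the same behavior is governed by the `init_lora_weights` option of `LoraConfig`, whose default preserves this zero-update-at-initialization guarantee.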