LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2106.09685 - This foundational paper introduces the LoRA method and explains the rationale behind its low-rank adaptation, which is central to understanding the selection of the rank 'r'.
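For quick reference, the construction the paper analyzes keeps the pretrained weight frozen and learns only a low-rank update, which is where the rank 'r' enters (notation follows the paper; $d$ and $k$ are the dimensions of the frozen weight):

$$
W = W_0 + \Delta W = W_0 + BA, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
$$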
PEFT: Parameter-Efficient Fine-Tuning - Documentation, Hugging Face, 2023 - The official documentation for the Hugging Face PEFT library offers practical guidance on configuring LoRA, including how to choose the hyperparameter 'r' based on community best practices and practical considerations.
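As a rough illustration of the kind of configuration the PEFT documentation covers, here is a minimal sketch of setting 'r' in a LoraConfig; the model name, target module names, and hyperparameter values below are illustrative assumptions, not recommendations from the documentation:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative starting values; the appropriate rank r depends on the task
# and model, as discussed in the PEFT documentation.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank matrices A and B
    lora_alpha=16,                        # scaling factor (the update is scaled by lora_alpha / r)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (names are model-specific)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # example base model
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of weights LoRA actually trains
```

Increasing r adds capacity to the adapter at the cost of more trainable parameters, so it is commonly tuned alongside lora_alpha rather than in isolation.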