LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, International Conference on Learning Representations (ICLR), 2021. DOI: 10.48550/arXiv.2106.09685 - The original paper introducing LoRA, a core parameter-efficient fine-tuning method.
QLoRA: Efficient Finetuning of Quantized LLMs, Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer, Advances in Neural Information Processing Systems (NeurIPS), 2023. DOI: 10.48550/arXiv.2305.14314 - Introduces QLoRA, which sharply reduces the memory needed to fine-tune large models by training LoRA adapters on top of a quantized base model.
Parameter-Efficient Fine-Tuning (PEFT), Hugging Face, 2024 - Official documentation for the Hugging Face PEFT library, offering practical guidance and an overview of PEFT methods; a minimal usage sketch follows this list.
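To show how these references fit together in practice, here is a minimal sketch of a LoRA fine-tuning setup with the Hugging Face peft library. The model name and hyperparameter values (r, lora_alpha, target module names) are illustrative assumptions, not recommendations from the papers above.

```python
# Minimal LoRA setup sketch using Hugging Face transformers + peft.
# Model choice and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-350m"  # hypothetical choice; any causal LM works
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA (Hu et al., 2021) freezes the base weight W and learns a low-rank
# update, so the effective weight becomes W + (alpha / r) * B @ A,
# where A is (r x k) and B is (d x r) with rank r << min(d, k).
lora_config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor alpha
    target_modules=["q_proj", "v_proj"],  # attention projections, as in the LoRA paper
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

For a QLoRA-style setup (Dettmers et al., 2023), the base model would instead be loaded in 4-bit precision (e.g., via transformers' BitsAndBytesConfig with load_in_4bit=True) before applying the same LoraConfig, so only the small adapter matrices are trained in full precision.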