Techniques for Multi-Adapter / Multi-Task Training
LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2106.09685 - Introduces Low-Rank Adaptation (LoRA), a foundational parameter-efficient fine-tuning method that forms the basis for multi-adapter strategies.
PEFT (Parameter-Efficient Fine-tuning) Library Documentation, Hugging Face, 2024 - Official documentation for the Hugging Face PEFT library, providing practical guidance on implementing and managing adapters, including functionality for loading and switching between multiple adapters on a single base model (a minimal sketch follows below).
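As a rough illustration of the multi-adapter workflow the PEFT documentation covers, the sketch below attaches two independent LoRA adapters to one base model and switches between them. It is not taken from the cited documentation; the base model ("gpt2"), adapter names ("task_a", "task_b"), and LoRA hyperparameters are placeholder assumptions chosen only to keep the example self-contained.

```python
# Minimal multi-adapter sketch with Hugging Face PEFT (illustrative assumptions:
# base model "gpt2", adapter names "task_a"/"task_b", arbitrary LoRA hyperparameters).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# Wrap the base model with a first LoRA adapter for task A.
config_a = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, config_a, adapter_name="task_a")

# Register a second, independently trainable adapter for task B on the same base weights.
config_b = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model.add_adapter("task_b", config_b)

# Only one adapter is active at a time; switch before training or inference on each task.
model.set_adapter("task_a")   # forward passes now route through the task A adapter
# ... train or evaluate on task A ...
model.set_adapter("task_b")   # switch to the task B adapter
# ... train or evaluate on task B ...
```

Because both adapters share the frozen base weights, this pattern keeps the per-task overhead limited to the small low-rank matrices introduced by each LoRA adapter.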