Taxonomy of Parameter-Efficient Fine-Tuning Methods
LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, 2021. arXiv preprint arXiv:2106.09685. DOI: 10.48550/arXiv.2106.09685 - Introduces Low-Rank Adaptation (LoRA), a prominent reparameterization method for efficient fine-tuning; see the LoRA sketch after this list.
Parameter-Efficient Transfer Learning for NLP, Neil Houlsby, Andrei Giurgiu, Stanisław Jastrzębski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly, 2019. Proceedings of the 36th International Conference on Machine Learning (ICML), PMLR Vol. 97. arXiv:1902.00751 - Presents adapter tuning, a foundational additive method for parameter-efficient transfer learning; see the adapter sketch after this list.
Prefix-Tuning: Optimizing Continuous Prompts for Generation, Xiang Lisa Li, Percy Liang, 2021. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP), Volume 1: Long Papers. DOI: 10.18653/v1/2021.acl-long.353 - Introduces Prefix-Tuning, an additive method that prepends trainable continuous prefix vectors to the keys and values of every attention layer while the language model itself stays frozen; see the prefix sketch after this list.
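To make the reparameterization idea behind LoRA concrete, here is a minimal PyTorch sketch. It wraps a frozen nn.Linear layer and trains only the low-rank factors; the class name LoRALinear and the default hyperparameters r and alpha are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen pretrained weight W plus a trainable
    low-rank update, y = Wx + (alpha / r) * B A x (Hu et al., 2021)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # A starts as small Gaussian noise, B at zero, so the update is 0 at init
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., in_features) -> (..., out_features)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap an existing projection, then train only lora_A and lora_B
layer = LoRALinear(nn.Linear(768, 768), r=8)
```

Because the update is the product of an out_features x r and an r x in_features matrix, the trainable parameter count scales with r rather than with the full weight matrix.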
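The adapter method of Houlsby et al. inserts small bottleneck modules into each transformer layer and trains only those. Below is a minimal sketch assuming a residual down-project/up-project bottleneck with a ReLU (the paper explores several nonlinearities); the names and the bottleneck size are illustrative.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Minimal bottleneck adapter sketch in the spirit of Houlsby et al. (2019):
    down-project, nonlinearity, up-project, plus a residual connection."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        # Near-zero init on the up-projection keeps the module close to
        # the identity at the start of fine-tuning
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter learns a small correction to h
        return h + self.up(torch.relu(self.down(h)))
```

In the paper's setup, two such modules are inserted per transformer layer (after attention and after the feed-forward block) while the pretrained backbone stays frozen.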
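For prefix-tuning, the trainable parameters are per-layer prefix activations prepended to the attention keys and values. A minimal sketch follows, under assumed tensor shapes (batch, heads, sequence, head_dim); the class and method names are hypothetical, and a real integration would hook this into the model's attention computation. Note the paper additionally reparameterizes the prefix through an MLP during training for stability, which this sketch omits.

```python
import torch
import torch.nn as nn

class PrefixKV(nn.Module):
    """Minimal prefix-tuning sketch after Li & Liang (2021): learn one
    key/value prefix per layer and prepend it to that layer's attention
    keys and values while the language model stays frozen."""
    def __init__(self, n_layers: int, n_heads: int, head_dim: int,
                 prefix_len: int = 10):
        super().__init__()
        shape = (n_layers, n_heads, prefix_len, head_dim)
        self.prefix_k = nn.Parameter(torch.randn(shape) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(shape) * 0.02)

    def extend(self, layer: int, k: torch.Tensor, v: torch.Tensor):
        """Prepend this layer's prefix to k and v, each of shape
        (batch, n_heads, seq_len, head_dim)."""
        b = k.size(0)
        pk = self.prefix_k[layer].unsqueeze(0).expand(b, -1, -1, -1)
        pv = self.prefix_v[layer].unsqueeze(0).expand(b, -1, -1, -1)
        return torch.cat([pk, k], dim=2), torch.cat([pv, v], dim=2)
```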