LoRA: Low-Rank Adaptation of Large Language Models. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. 2021, International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2106.09685 - Introduces Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that addresses the compute and memory challenges of adapting large foundation models, offering a practical option for the inner loop of meta-learning.
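The core idea in the cited paper is to freeze the pretrained weight W0 and learn only a low-rank update BA, so the adapted layer computes h = W0 x + BA x with far fewer trainable parameters. A minimal NumPy sketch of that forward pass (dimensions, the scaling factor alpha, and variable names here are illustrative, not taken from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4  # rank r << d gives the low-rank bottleneck

# Frozen pretrained weight: never updated during fine-tuning.
W0 = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero, so the adapted
# layer initially reproduces the pretrained layer exactly.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x, alpha=8.0):
    """h = W0 x + (alpha / r) * B A x  (LoRA-style forward pass)."""
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B = 0 the adapter branch contributes nothing:
assert np.allclose(lora_forward(x), W0 @ x)

# Trainable parameters per layer: r*(d_in + d_out) vs d_in*d_out
# for full fine-tuning (here 512 vs 4096).
print(r * (d_in + d_out), "vs", d_in * d_out)
```

Only A and B would receive gradients during fine-tuning, which is why the method suits settings, such as a meta-learning inner loop, where many cheap adaptations of one large frozen model are needed.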