On First-Order Meta-Learning Algorithms, Alex Nichol, Joshua Achiam, and John Schulman, 2018. arXiv preprint arXiv:1803.02999 (arXiv) - Proposes Reptile, a first-order meta-learning algorithm that offers a computationally cheaper alternative to MAML, making it better suited to large-scale applications and directly addressing scalability.
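A minimal sketch of the Reptile update described above: run a few SGD steps on a task to get adapted weights, then move the meta-parameters a fraction of the way toward them. The toy quadratic task, step counts, and learning rates here are illustrative choices, not values from the paper.

```python
import numpy as np

def inner_sgd(theta, task_grad, k=5, lr=0.02):
    """Run k inner-loop SGD steps on one task, starting from theta."""
    phi = theta.copy()
    for _ in range(k):
        phi -= lr * task_grad(phi)
    return phi

def reptile_update(theta, task_grad, outer_lr=0.1, k=5, inner_lr=0.02):
    """Reptile outer update: theta <- theta + outer_lr * (phi - theta).
    First-order: no second derivatives through the inner loop."""
    phi = inner_sgd(theta, task_grad, k, inner_lr)
    return theta + outer_lr * (phi - theta)

# Toy task: minimize ||theta - target||^2, gradient 2*(theta - target)
target = np.array([1.0, -2.0])
grad = lambda w: 2.0 * (w - target)

theta = np.zeros(2)
for _ in range(500):
    theta = reptile_update(theta, grad)
# theta drifts toward the task optimum without any Hessian computation
```

In the real algorithm the task (and hence its gradient) is resampled each outer iteration; with a single fixed task the update simply converges to that task's optimum.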
LoRA: Low-Rank Adaptation of Large Language Models, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen, 2021. International Conference on Learning Representations (ICLR 2022) (OpenReview.net). DOI: 10.48550/arXiv.2106.09685 - Introduces Low-Rank Adaptation (LoRA), a key parameter-efficient fine-tuning technique that sharply reduces the number of trainable parameters, and is central to parameter-efficient meta-learning on foundation models.
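A minimal sketch of the LoRA idea: freeze a pretrained weight W and learn only a low-rank update BA, scaled by alpha/r, so the adapted layer computes Wx + (alpha/r)BAx. The layer dimensions, rank, and scaling value below are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4                  # hypothetical layer shape, low rank r << min(d, k)
alpha = 8                            # LoRA scaling hyperparameter

W = rng.normal(size=(d, k))          # frozen pretrained weight (not trained)
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init so BA starts at 0

def lora_forward(x):
    """Adapted layer: y = Wx + (alpha/r) * B(Ax); only A and B receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Trainable parameters drop from d*k = 4096 (full fine-tuning of W)
# to d*r + r*k = 512, an 8x reduction at rank 4.
```

Because B is initialized to zero, the adapted layer is exactly the pretrained layer at the start of fine-tuning, and the low-rank update can be merged back into W after training with no inference overhead.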