The Power of Scale for Parameter-Efficient Prompt Tuning, Brian Lester, Rami Al-Rfou, and Noah Constant, 2021. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics. DOI: 10.18653/v1/2021.emnlp-main.243 - The foundational paper introducing Prompt Tuning, which prepends learnable continuous prompt vectors to the input of a frozen language model to steer it toward a downstream task (a minimal sketch of the idea follows these references).
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks, Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang, 2022. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), arXiv:2110.07602. DOI: 10.48550/arXiv.2110.07602 - Presents P-Tuning v2, which applies prompt embeddings at every layer of the Transformer rather than only at the input, significantly improving performance on complex NLU tasks (see the second sketch below).
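To make the mechanism in Lester et al. concrete, here is a minimal PyTorch-style sketch of prompt tuning: a small matrix of continuous prompt embeddings is the only trainable parameter, and it is prepended to the token embeddings of a frozen backbone. Class and parameter names (`SoftPromptEmbedding`, `prompt_length`) are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class SoftPromptEmbedding(nn.Module):
    """Learnable continuous prompt prepended to the token embeddings.

    Illustrative sketch: only the soft-prompt parameters are trained,
    while the backbone model's weights stay frozen (Lester et al., 2021).
    """

    def __init__(self, word_embeddings: nn.Embedding, prompt_length: int = 20):
        super().__init__()
        self.word_embeddings = word_embeddings
        # Initialize the soft prompt from randomly sampled vocabulary embeddings,
        # one of the initialization strategies discussed in the paper.
        init_ids = torch.randint(0, word_embeddings.num_embeddings, (prompt_length,))
        self.soft_prompt = nn.Parameter(word_embeddings.weight[init_ids].detach().clone())

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # input_ids: (batch, seq_len) -> output: (batch, prompt_len + seq_len, dim)
        token_embeds = self.word_embeddings(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)
```

In training, one would pass only `soft_prompt` to the optimizer and keep every backbone parameter with `requires_grad=False`.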
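And a hedged sketch of the deep-prompt idea behind P-Tuning v2: instead of a single prompt at the input layer, a separate prompt is learned for every Transformer layer and concatenated to that layer's attention keys and values. The class name, shapes, and initialization below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DeepPromptEncoder(nn.Module):
    """Per-layer continuous prompts in the spirit of P-Tuning v2 (sketch only).

    A prompt of `prompt_length` key/value pairs is learned for every
    Transformer layer; the frozen backbone would concatenate these to its
    attention keys and values at the matching layer.
    """

    def __init__(self, num_layers: int, num_heads: int, head_dim: int, prompt_length: int = 32):
        super().__init__()
        # One (key, value) prompt per layer: (layers, 2, prompt_len, heads, head_dim)
        self.prompts = nn.Parameter(
            torch.randn(num_layers, 2, prompt_length, num_heads, head_dim) * 0.02
        )

    def forward(self, batch_size: int):
        # Returns one (key, value) pair per layer, each shaped
        # (batch, heads, prompt_len, head_dim) for attention concatenation.
        past = []
        for layer_prompt in self.prompts:
            k, v = layer_prompt[0], layer_prompt[1]  # (prompt_len, heads, head_dim)
            k = k.permute(1, 0, 2).unsqueeze(0).expand(batch_size, -1, -1, -1)
            v = v.permute(1, 0, 2).unsqueeze(0).expand(batch_size, -1, -1, -1)
            past.append((k, v))
        return past
```

The per-layer prompts are what distinguish this from the single input-layer prompt above, and they are again the only trainable parameters.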