The Power of Scale for Parameter-Efficient Prompt Tuning, Brian Lester, Rami Al-Rfou, and Noah Constant, 2021. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (Association for Computational Linguistics). DOI: 10.18653/v1/2021.emnlp-main.243 - Introduces and thoroughly evaluates Prompt Tuning as a parameter-efficient fine-tuning method, showing that it becomes competitive with full fine-tuning as model scale increases.
Prefix-Tuning: Optimizing Continuous Prompts for Generation, Xiang Lisa Li and Percy Liang, 2021. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (Association for Computational Linguistics). DOI: 10.18653/v1/2021.acl-long.353 - Introduces Prefix Tuning, a related technique that learns continuous task-specific prefixes at every transformer layer, in contrast to Prompt Tuning, which tunes only the input embeddings.
Parameter-Efficient Fine-Tuning (PEFT), Hugging Face, 2024. - Official documentation for the Hugging Face PEFT library, offering practical guidance and examples for implementing Prompt Tuning and other PEFT methods (see the sketch below).
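
To make the PEFT entry concrete, here is a minimal sketch of wrapping a causal language model for Prompt Tuning with the PEFT library, so that only the soft prompt embeddings are trained. The model checkpoint, virtual-token count, and initialization text are illustrative choices, not values taken from the cited documentation.

```python
# Minimal Prompt Tuning sketch using the Hugging Face PEFT library.
# The checkpoint, token count, and init text below are arbitrary examples.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"  # any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

# Learn 8 virtual tokens prepended to the input embeddings, initialized
# from a natural-language task description (as in Lester et al., 2021).
config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify the sentiment of this review:",
    num_virtual_tokens=8,
    tokenizer_name_or_path=model_name,
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the prompt embeddings are trainable
```

The wrapped model can then be trained with a standard training loop or the transformers Trainer; gradients flow only to the prompt embeddings, which is what makes the method parameter-efficient.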