Language Models are Few-Shot Learners, Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei, 2020. Advances in Neural Information Processing Systems (NeurIPS), Vol. 33. DOI: 10.48550/arXiv.2005.14165 - This foundational paper demonstrates few-shot in-context learning: a large language model can perform a new task given only a few examples in the prompt, with no fine-tuning, which is a key method for prompt design.
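To make the few-shot pattern described in this paper concrete, here is a minimal sketch of assembling labeled demonstrations followed by an unlabeled query into a single prompt; the sentiment-labeling task, example texts, and the build_few_shot_prompt helper are hypothetical illustrations, not taken from the paper.

# Minimal sketch of few-shot prompting: the task is conveyed entirely through
# a few in-prompt demonstrations, and the model completes the final line.
def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations followed by an unlabeled query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "Positive"),
    ("I walked out halfway through.", "Negative"),
]
print(build_few_shot_prompt(demos, "An unforgettable performance by the lead actor."))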
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig, 2023. ACM Computing Surveys, Vol. 55, No. 9. DOI: 10.1145/3560815 - A systematic survey that organizes and analyzes prompting methods, providing a broad overview of the approaches used to guide LLM generation effectively.
A Survey of Prompting Methods in Large Language Models, Bailin Wang, Ruoxi Sun, Shaohan Huang, Furu Wei, Li Dong, Badr Youbi, Heng Ji, 2023. arXiv preprint arXiv:2303.12608. DOI: 10.48550/arXiv.2303.12608 - This paper reviews different prompting methods, discussing their components and strategies for improving the performance of large language models across various tasks.
Prompt engineering guide, OpenAI, 2024 - An official, practical guide from a leading LLM developer, offering best practices and common strategies for designing effective prompts that control LLM outputs.
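As a companion to the guide entry above, here is a minimal sketch of the chat-message structure commonly recommended in prompt-design guidance: a system message fixes the role and constraints, and the user message carries the delimited input. The model name and prompt wording are illustrative assumptions, not taken verbatim from the guide.

# Sketch of structuring a prompt with the openai Python client (v1+ interface).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
    messages=[
        {"role": "system",
         "content": "You are a terse assistant. Answer in at most two sentences."},
        {"role": "user",
         "content": "Summarize the text between triple quotes.\n"
                    '"""Large language models can follow instructions given in the prompt."""'},
    ],
)
print(response.choices[0].message.content)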