Language Models are Few-Shot Learners, Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei, 2020, arXiv preprint arXiv:2005.14165, DOI: 10.48550/arXiv.2005.14165 - Introduces the concept of in-context learning in large language models (LLMs), demonstrating how models like GPT-3 can perform new tasks given a few examples without parameter updates, forming the basis of few-shot prompting.
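The few-shot prompt format this paper describes can be sketched in a few lines: a short task description, a handful of input-output demonstrations, and a final query the model completes, all sent as plain text with no parameter updates. The task, examples, and `=>` separator below are illustrative, not taken from the paper.

```python
# Illustrative demonstrations for an English-to-French translation task.
demonstrations = [
    ("cheese", "fromage"),
    ("house", "maison"),
    ("cat", "chat"),
]

def build_few_shot_prompt(examples, query, task="Translate English to French."):
    """Concatenate a task description, demonstrations, and a query into one prompt."""
    lines = [task]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    # The model is expected to continue this final, incomplete line.
    lines.append(f"{query} =>")
    return "\n".join(lines)

prompt = build_few_shot_prompt(demonstrations, "dog")
print(prompt)
```

The resulting string would be passed verbatim to a language model's completion endpoint; the "learning" happens entirely in the model's context window.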
Prompt Engineering Guide, Learn Prompting, 2023 - A comprehensive guide to prompt engineering techniques, with a dedicated section explaining few-shot prompting, its benefits, and practical applications, often citing relevant research papers.
Few-shot prompt templates, LangChain documentation contributors, 2024 - Official documentation for LangChain's FewShotPromptTemplate, providing details on its usage, parameters, and examples for programmatically constructing few-shot prompts in Python.
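The pattern that LangChain's FewShotPromptTemplate implements can be sketched in plain Python without the library: a per-example template, a prefix, and a suffix are rendered and joined into the final prompt. The function and variable names below are hypothetical stand-ins for illustration, not LangChain's actual API.

```python
# Illustrative examples for an antonym task, each a dict of template fields.
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

example_template = "Word: {word}\nAntonym: {antonym}"
prefix = "Give the antonym of every input."
suffix = "Word: {input}\nAntonym:"

def render_few_shot_prompt(examples, example_template, prefix, suffix, **inputs):
    """Render prefix, formatted examples, and suffix into one prompt string."""
    parts = [prefix]
    parts += [example_template.format(**ex) for ex in examples]
    parts.append(suffix.format(**inputs))
    return "\n\n".join(parts)

print(render_few_shot_prompt(examples, example_template, prefix, suffix, input="big"))
```

LangChain's class adds conveniences on top of this core idea, such as example selectors that choose which demonstrations to include at runtime; the documentation entry above covers those parameters.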
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?, Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer, 2022, EMNLP 2022 (long), DOI: 10.48550/arXiv.2202.12837 - Investigates the underlying reasons for the effectiveness of in-context learning (few-shot prompting), shedding light on the importance of input-output format and label space over the surface form or factual correctness of demonstrations.