Language Models are Few-Shot Learners, Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei, 2020, Advances in Neural Information Processing Systems (NeurIPS), Vol. 33, DOI: 10.5555/3495724.3495883 - This foundational paper demonstrates how large language models can perform tasks such as question answering and text generation through few-shot prompting, which underpins common interaction patterns.
Prompt Engineering Guide, Lilian Weng and Contributors, 2023 - This comprehensive online guide provides practical strategies and patterns for effective interaction with large language models, covering various prompting techniques and common use cases.
InCoder: A Generative Model for Code Infilling and Synthesis, Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis, 2023, ICLR, DOI: 10.48550/arXiv.2204.05999 - This paper introduces a generative model designed specifically for code infilling and synthesis, demonstrating an important code-generation capability of LLMs discussed in this section.