Training Language Models to Follow Instructions with Human Feedback, Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe, 2022, Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2203.02155 - This seminal paper introduces the InstructGPT model, detailing the supervised fine-tuning (SFT) phase as the initial step in aligning large language models with human preferences.
Language Models are Few-Shot Learners, Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei, 2020, arXiv. DOI: 10.48550/arXiv.2005.14165 - This foundational paper introduces GPT-3, detailing the pre-training methodology of large language models for next-token prediction, which serves as the basis for subsequent fine-tuning techniques.
Speech and Language Processing, Daniel Jurafsky, James H. Martin, 2025 - This comprehensive textbook offers foundational knowledge on natural language processing, including explanations of supervised learning techniques relevant to language model fine-tuning. The citation refers to the publicly available draft of the 3rd edition.