Finetuned Language Models Are Zero-Shot Learners, Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le, 2022. arXiv preprint arXiv:2109.01652. DOI: 10.48550/arXiv.2109.01652 - A foundational paper introducing instruction tuning, demonstrating how fine-tuning on a collection of instruction-formatted datasets improves zero-shot generalization and influences data formatting for SFT (a brief sketch of the instruction format follows).
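A minimal sketch of the instruction-formatting idea the paper popularized: a raw labeled example is wrapped in a natural-language instruction to produce a (prompt, target) pair. The template wording, field names, and the sentiment task below are illustrative assumptions, not the paper's exact templates.

```python
# Illustrative only: converts a labeled example into an instruction-formatted
# (prompt, target) pair in the spirit of FLAN-style templates. The template
# text and dict keys here are hypothetical, not taken from the paper.

def to_instruction_example(text: str, label: str) -> dict:
    """Wrap a raw classification example in a natural-language instruction."""
    prompt = (
        "Classify the sentiment of the following movie review as "
        f"positive or negative.\n\nReview: {text}\nSentiment:"
    )
    return {"prompt": prompt, "target": label}

if __name__ == "__main__":
    example = to_instruction_example("A warm, funny, engaging film.", "positive")
    print(example["prompt"])
    print(example["target"])
```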
Fine-tuning, OpenAI, 2024 - Official guide on preparing data for fine-tuning OpenAI models, including requirements for chat-formatted input and example data structures (a brief sketch of the file layout follows).
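A minimal sketch of the chat-formatted JSONL training file the guide describes: one JSON object per line, each containing a "messages" list of role/content turns. The file name and example content below are illustrative; consult the guide for current requirements such as minimum example counts and token limits.

```python
import json

# Illustrative training set: each entry becomes one JSONL line with a
# "messages" list of system/user/assistant turns.
training_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Account > Reset password and follow the emailed link."},
        ]
    },
]

# Write one JSON object per line (JSONL), as the guide expects.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in training_examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```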