Finetuned Language Models are Zero-Shot Learners. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le. International Conference on Learning Representations (ICLR), 2022. DOI: 10.48550/arXiv.2109.01652 - Presents the foundational work on instruction tuning (FLAN), demonstrating that fine-tuning on a diverse set of instructions significantly enhances generalization to unseen tasks and improves zero-shot performance.
Scaling Instruction-Finetuned Language Models. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. arXiv preprint arXiv:2210.11416, 2022. DOI: 10.48550/arXiv.2210.11416 - Extends instruction tuning (FLAN-T5) by exploring how scaling the number of instruction-tuning tasks and the model size affects performance, highlighting the importance of data quality and diversity.