ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, 2023. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2210.03629 - The original research paper that introduced the ReAct framework, detailing its conceptual design and practical application in combining reasoning and acting for language model agents.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, 2022. arXiv preprint arXiv:2201.11903. DOI: 10.48550/arXiv.2201.11903 - This foundational paper introduces Chain-of-Thought prompting, a technique that guides LLMs to produce a series of intermediate reasoning steps. It is a direct precursor to ReAct and a core element of the framework's 'Thought' component.