ReAct: Synergizing Reasoning and Acting in Language Models. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao. 2023, International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.2210.03629 - Introduces the ReAct framework, which interleaves reasoning traces and acting steps in LLMs, essential for autonomous goal pursuit and interaction with external environments.
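The ReAct pattern described above alternates Thought, Action, and Observation turns. A minimal sketch of that loop, assuming a scripted stand-in for the model (`fake_llm`) and a toy lookup tool rather than the paper's Wikipedia API; all names here are hypothetical:

```python
# Minimal ReAct-style loop: the agent alternates Thought / Action / Observation.
# `fake_llm` is a hypothetical stand-in that scripts two turns; a real agent
# would call a language model on the accumulated transcript instead.

TOOLS = {
    "lookup": lambda q: {"capital of France": "Paris"}.get(q, "unknown"),
}

def fake_llm(transcript: str) -> str:
    # First turn: decide to use a tool. Second turn (after an Observation): finish.
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: lookup[capital of France]"
    return "Thought: I now know the answer.\nAction: finish[Paris]"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += step + "\n"
        # Parse the "Action: name[argument]" line emitted by the model.
        action_line = step.splitlines()[-1].removeprefix("Action: ")
        name, arg = action_line.split("[", 1)
        arg = arg.rstrip("]")
        if name == "finish":
            return arg                      # final answer, loop ends
        obs = TOOLS[name](arg)              # execute the tool
        transcript += f"Observation: {obs}\n"
    return "no answer"

print(react("What is the capital of France?"))  # → Paris
```

The key design point is that tool results are fed back into the transcript as Observations, so each reasoning step can condition on real environment feedback rather than on the model's unchecked internal state.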
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou. 2022, Advances in Neural Information Processing Systems, Vol. 35 (Curran Associates, Inc.). DOI: 10.48550/arXiv.2201.11903 - Introduces Chain-of-Thought prompting, a technique that improves LLM reasoning by prompting models to generate intermediate reasoning steps before the final answer.
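Chain-of-Thought prompting works by including worked exemplars whose answers show intermediate reasoning, so the model imitates the step-by-step pattern. A minimal sketch of constructing such a prompt, using the paper's well-known tennis-ball exemplar; the helper name is hypothetical:

```python
# Few-shot Chain-of-Thought prompt construction: the exemplar's answer spells
# out intermediate steps, which nudges the model to reason before answering.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    # Prepend the worked exemplar, then leave "A:" open for the model to complete.
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt("A farmer has 3 pens with 4 sheep each. How many sheep in total?")
print(prompt)
```

The contrast with standard few-shot prompting is only in the exemplar's answer: "The answer is 11" alone gives direct-answer behavior, while the added arithmetic steps elicit a reasoning chain on the new question.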