ReAct: Synergizing Reasoning and Acting in Language Models. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao. International Conference on Learning Representations (ICLR), 2023. DOI: 10.48550/arXiv.2210.03629 - Introduces a paradigm in which LLMs interleave reasoning (Thought) and action (Act) steps and process the resulting observations, forming a basis for detecting deviations and reacting with new actions.
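The Thought/Act/Observation loop this entry describes can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: `scripted_model` stands in for an LLM and the `Lookup` tool stands in for a real environment, and all names here are hypothetical.

```python
# Hedged sketch of a ReAct-style loop: interleave reasoning (Thought)
# and acting (Act), feeding each Observation back into the next step.
# scripted_model and the Lookup tool are stand-ins, not the paper's code.

def lookup(entity, facts):
    """Toy tool: return a stored fact, mimicking an environment observation."""
    return facts.get(entity, "no result")

def react_loop(question, model, tools, max_steps=5):
    """Run Thought -> Act -> Observation steps until the model emits Finish."""
    trace = [f"Question: {question}"]
    for _ in range(max_steps):
        thought, action, arg = model(trace)
        trace.append(f"Thought: {thought}")
        if action == "Finish":
            trace.append(f"Answer: {arg}")
            return arg, trace
        obs = tools[action](arg)                 # act on the environment
        trace.append(f"Act: {action}[{arg}]")
        trace.append(f"Observation: {obs}")      # observation informs next Thought
    return None, trace

# Scripted stand-in for the LLM: first look something up, then answer.
facts = {"ICLR 2023": "held in Kigali, Rwanda"}

def scripted_model(trace):
    if not any(line.startswith("Observation") for line in trace):
        return ("I should look up the venue.", "Lookup", "ICLR 2023")
    obs = [l for l in trace if l.startswith("Observation")][-1]
    return ("The observation answers the question.", "Finish",
            obs.split(": ", 1)[1])

answer, trace = react_loop("Where was ICLR 2023 held?", scripted_model,
                           {"Lookup": lambda e: lookup(e, facts)})
# answer -> "held in Kigali, Rwanda"
```

A deviation (e.g., a "no result" observation) would surface in the trace, letting the model issue a corrective action on the next step, which is the detect-and-react behavior the annotation refers to.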
Tree of Thoughts: Deliberate Problem Solving with Large Language Models. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan. Advances in Neural Information Processing Systems (NeurIPS), 2023. DOI: 10.48550/arXiv.2305.10601 - Describes a method for complex problem solving that lets LLMs explore multiple reasoning paths, backtrack, and re-evaluate, improving their ability to recover from planning failures.
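The explore/evaluate/backtrack pattern this entry describes can be sketched as a small tree search. This is a hypothetical toy, not the paper's method: the task (reach a target sum by choosing increments), the `propose` step, and the `value` heuristic are all illustrative stand-ins for LLM-generated thoughts and LLM-based evaluation.

```python
# Hedged sketch of Tree-of-Thoughts-style search: propose several candidate
# "thoughts" per step, evaluate them, expand only promising ones, and
# backtrack from dead ends. Task and heuristic are illustrative only.

def propose(path, candidates):
    """Propose next thoughts: each candidate increment extends the path."""
    return [path + [c] for c in candidates]

def value(path, target):
    """Heuristic evaluation: prune any path that already overshoots."""
    return sum(path) <= target

def tot_search(target, candidates, depth):
    """Depth-first exploration with pruning and backtracking."""
    def dfs(path):
        if sum(path) == target:
            return path                      # goal reached
        if len(path) == depth:
            return None                      # dead end: backtrack
        for nxt in propose(path, candidates):
            if value(nxt, target):           # expand only promising branches
                found = dfs(nxt)
                if found is not None:
                    return found
        return None                          # no child worked: backtrack
    return dfs([])

solution = tot_search(target=10, candidates=[5, 3, 2], depth=3)
# solution -> [5, 5]
```

In the paper the proposal and evaluation steps are themselves LLM calls; here they are deterministic functions so the search structure, rather than the model, is what the sketch shows.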