ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, 2023. arXiv preprint arXiv:2210.03629. DOI: 10.48550/arXiv.2210.03629 - Introduces the ReAct framework, which interleaves reasoning (Thought) and acting (Action) steps, enabling LLMs to dynamically create, maintain, and adjust action plans while interacting with external environments.
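The interleaved Thought/Action/Observation loop described above can be sketched as follows; the `llm` and `run_tool` functions are invented stubs standing in for a real model and tool, not the paper's code.

```python
# Minimal ReAct-style loop (illustrative sketch; `llm` and `run_tool`
# are hypothetical stand-ins for a language model and an external tool).

def llm(prompt):
    # Stub model: first emits a Thought and a tool Action, then finishes.
    if "Observation" not in prompt:
        return "Thought: I need the capital of France.\nAction: lookup[France]"
    return "Thought: I have the answer.\nAction: finish[Paris]"

def run_tool(action):
    # Stub environment: a tiny lookup table standing in for a search tool.
    table = {"France": "Paris"}
    key = action[len("lookup["):-1]
    return table.get(key, "unknown")

def react(question, max_steps=5):
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(prompt)                    # reasoning step (Thought)...
        action = step.split("Action: ")[1]
        if action.startswith("finish["):
            return action[len("finish["):-1]  # final answer
        observation = run_tool(action)        # ...followed by acting
        prompt += f"\n{step}\nObservation: {observation}"
    return None

print(react("What is the capital of France?"))  # → Paris
```

The key point is that each tool observation is appended back into the prompt, so later reasoning steps can condition on it.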
Tree of Thoughts: Deliberate Problem Solving with Large Language Models, Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan, 2023. Advances in Neural Information Processing Systems (NeurIPS) 36. DOI: 10.48550/arXiv.2305.10601 - Proposes the Tree of Thoughts framework, which generalizes Chain-of-Thought prompting by allowing LLMs to explore multiple reasoning paths and evaluate intermediate thoughts, leading to more robust problem-solving in complex tasks.
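The search-over-thoughts idea can be sketched as a beam search over partial solutions; here `expand` and `score` are toy stand-ins for the LLM's thought generator and value evaluator, not the paper's implementation.

```python
# Toy Tree-of-Thoughts search: breadth-first expansion of partial "thoughts"
# with a value function pruning the frontier to the best candidates.
# `expand` and `score` are illustrative stubs, not real LLM calls.

def expand(thought):
    # Stand-in for an LLM proposing candidate next reasoning steps.
    return [thought + d for d in "012"]

def score(thought):
    # Stand-in for an LLM-based evaluator of a partial solution.
    return sum(int(c) for c in thought)

def tree_of_thoughts(root="", depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [t for node in frontier for t in expand(node)]
        # Keep only the `beam` most promising partial thoughts.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

print(tree_of_thoughts())  # → "222" under these toy stubs
```

Chain-of-Thought corresponds to `beam=1` with a single expansion per step; widening the beam is what lets the model explore and compare alternative reasoning paths.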
Graph of Thoughts: Solving Elaborate Problems with Large Language Models, Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler, 2024. Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38. DOI: 10.1609/aaai.v38i16.29720 - Extends the Tree of Thoughts concept to a general graph structure, enabling LLMs to manage and manipulate complex thought processes more flexibly, supporting advanced reasoning and planning.
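What distinguishes a graph from a tree here is that several thoughts can be merged into one node. A minimal sketch of that aggregation operation, with invented stand-in functions rather than the paper's actual operators:

```python
# Toy Graph-of-Thoughts structure: thoughts are graph nodes, and an
# aggregation edge can merge several parent thoughts into one node,
# which a strict tree cannot express. All functions are illustrative stubs.

class Thought:
    def __init__(self, text, parents=()):
        self.text = text
        self.parents = list(parents)

def generate(parent, suffix):
    # Stand-in for an LLM expanding a single thought.
    return Thought(parent.text + suffix, [parent])

def aggregate(thoughts):
    # Graph-specific operation: combine several thoughts into one node.
    merged = " + ".join(t.text for t in thoughts)
    return Thought(merged, thoughts)

root = Thought("plan")
a = generate(root, "-A")
b = generate(root, "-B")
combined = aggregate([a, b])   # node with two parents: a DAG, not a tree
print(combined.text)           # → plan-A + plan-B
print(len(combined.parents))   # → 2
```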
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela, 2020. Advances in Neural Information Processing Systems (NeurIPS) 33. DOI: 10.48550/arXiv.2005.11401 - Introduces Retrieval-Augmented Generation (RAG), a fundamental method for enhancing LLM factual accuracy and reducing hallucinations by conditioning generation on retrieved documents, highly relevant to the 'Self-Ask' approach's goal of factual grounding.
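The retrieve-then-condition pattern can be sketched in a few lines; the corpus, word-overlap `retrieve`, and stub `generate` below are hypothetical simplifications, not the paper's DPR retriever and BART generator.

```python
# Minimal RAG sketch: pick the most relevant document by word overlap,
# then condition a (stubbed) generator on it. Illustrative only; a real
# RAG system uses dense retrieval and a trained seq2seq generator.

CORPUS = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]

def retrieve(query):
    # Toy retriever: document sharing the most words with the query.
    q = set(query.lower().split())
    return max(CORPUS, key=lambda d: len(q & set(d.lower().split())))

def generate(query, context):
    # Stub generator: a real model would decode an answer conditioned
    # on both the query and the retrieved context.
    return f"Answer based on: {context}"

def rag(query):
    return generate(query, retrieve(query))

print(rag("When was the Eiffel Tower completed?"))
```

Grounding generation in retrieved text is what lets the model cite facts it was never trained to memorize, which is the link to factual grounding noted above.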