Self-Consistency Improves Chain of Thought Reasoning in Language Models, Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou, ICLR 2023, DOI: 10.48550/arXiv.2203.11171 - Introduces self-consistency, a method for improving LLM reasoning by sampling multiple reasoning paths and taking a majority vote over their final answers. Related to multi-path reasoning.
ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, ICLR 2023, DOI: 10.48550/arXiv.2210.03629 - Describes ReAct, a framework that interleaves reasoning traces with actions in a sequential loop, providing a contrast to ToT's tree-based exploration.