ReAct: Synergizing Reasoning and Acting in Language Models, Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao, 2022, arXiv preprint, DOI: 10.48550/arXiv.2210.03629 - Introduces the ReAct framework, a common pattern for LLM agents that interleaves reasoning (Thought) and acting (Action) steps, which is central to the chapter.
Toolformer: Language Models That Can Use Tools, Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom, 2023, arXiv preprint, DOI: 10.48550/arXiv.2302.04761 - Explores how language models can be trained to use external tools through API calls, directly relevant to equipping agents with capabilities beyond their inherent knowledge.
The Rise and Potential of Large Language Model Based Agents: A Survey, Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui, 2023, arXiv preprint, DOI: 10.48550/arXiv.2309.07864 - Provides a comprehensive overview of LLM-based autonomous agents, covering architectures such as ReAct and plan-and-execute, agent components, and multi-agent collaboration, offering broad context for the chapter.