Large Language Models are inherently stateless. Each API call is processed in isolation, with no built-in recollection of previous interactions. This presents a direct challenge for building applications like chatbots or assistants, where maintaining the context of a conversation is necessary for a coherent and useful user experience.
This chapter introduces LangChain's memory components, which are designed to solve this problem. You will learn how to add state to your chains and agents, enabling them to remember and reference past exchanges within a single session.
We will cover several memory management strategies, each suited to different use cases. You will start with simple approaches like ConversationBufferMemory, which retains a complete log of the chat history. For longer conversations where token limits become a factor, we will examine ConversationSummaryMemory, which uses an LLM to condense the history as it grows. We will also cover techniques for bounding the context at a fixed size, such as windowed and token-based memory types; a short sketch follows.
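As a preview, here is a minimal sketch contrasting full-buffer and windowed memory, assuming the classic langchain.memory module from pre-1.0 LangChain releases (newer versions migrate this functionality elsewhere); the conversation text is purely illustrative. ConversationSummaryMemory follows the same save/load pattern but additionally takes an LLM to produce its summaries.

```python
# A minimal sketch, assuming the classic langchain.memory API
# (pre-1.0 LangChain releases).
from langchain.memory import (
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
)

# Buffer memory keeps the full transcript verbatim.
buffer = ConversationBufferMemory()
buffer.save_context({"input": "Hi, I'm Ada."}, {"output": "Hello, Ada!"})
buffer.save_context({"input": "What's my name?"}, {"output": "Your name is Ada."})
print(buffer.load_memory_variables({})["history"])
# Both exchanges appear in the returned history string.

# Windowed memory keeps only the last k exchanges, bounding prompt size.
window = ConversationBufferWindowMemory(k=1)
window.save_context({"input": "Hi, I'm Ada."}, {"output": "Hello, Ada!"})
window.save_context({"input": "What's my name?"}, {"output": "Your name is Ada."})
print(window.load_memory_variables({})["history"])
# Only the most recent exchange remains.
```

The trade-off previewed here is the subject of sections 4.2 through 4.4: the buffer is lossless but grows without bound, while windowed and token-based variants cap prompt size at the cost of forgetting older turns.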
Finally, you will see the practical steps for integrating these memory objects into your applications. The chapter concludes with a hands-on exercise to build a chatbot that can maintain conversational context from one turn to the next.
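To make that integration concrete ahead of section 4.5, the sketch below attaches buffer memory to a chain, assuming the classic ConversationChain API and an OpenAI-backed chat model; the model name is a placeholder, and an API key is assumed to be available in the environment.

```python
# A sketch of wiring memory into a chain, assuming the classic
# ConversationChain API; the model name below is a placeholder.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI  # assumes OPENAI_API_KEY is set

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = ConversationChain(llm=llm, memory=ConversationBufferMemory())

print(chain.predict(input="Hi, I'm Ada."))
# The memory injects the first exchange into the prompt,
# so the model can resolve the reference in this follow-up:
print(chain.predict(input="What's my name?"))
```

On each call, the chain loads the stored history into the prompt before invoking the model and saves the new exchange afterward; the closing exercise builds this loop out into a full chatbot.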
4.1 The Importance of State in Conversations
4.2 Buffer Memory for Short-Term Recall
4.3 Summarization Memory for Long Conversations
4.4 Using Windowed and Token-Based Memory
4.5 Adding Memory to Chains and Agents
4.6 Practice: Building a Chatbot with Memory