To build agents that can reason and interact with their environment, we need a structured way to connect the LLM's "thinking" process to external actions. The ReAct pattern, short for Reasoning and Acting, provides an effective framework for this. It formalizes the cycle of thought, action, and observation that allows an agent to break down a problem, execute tasks, and use the results to inform its next steps.
The core idea is to prompt an LLM to generate not just an action, but also the reasoning behind it. This process unfolds in a loop: the LLM first generates a Thought about how to approach the goal, then an Action to take (like using a tool), and finally waits for an Observation (the result of that action). This observation is fed back into the prompt for the next cycle, allowing the agent to course-correct, gather more information, or decide it has enough information to provide a final answer.
This cycle can be visualized as a continuous loop that repeats until the agent achieves its objective:
Goal → Thought → Action → Observation → Thought → Action → Observation → ... → Final Answer

The agent module provides a ReActAgent class that orchestrates this entire process. To use it, we need two main components: an LLM to act as the reasoning engine and a set of tools that define the actions the agent can perform.
Let's break down each step of the ReAct loop and how it contributes to the agent's behavior. An LLM used within a ReAct agent is prompted to produce a specific text format that separates its internal monologue from its chosen action.
Thought: I need to calculate the area of a circle with a radius of 5. I should use the calculator tool.
Action: calculator
Action Input: 3.14 * 5**2
Observation: Result: 78.5
Thought: I now have the calculated area. I can provide the final answer.

The final thought leads to a special "Final Answer" action that concludes the loop. This structure makes the agent's reasoning process transparent and easy to debug, as you can inspect the sequence of thoughts that led to its final output.
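To make the parsing step concrete, the helper below shows one way to split a ReAct-formatted completion into its labeled parts with regular expressions. This is an illustrative sketch, not kerb's actual parser; a real implementation would also handle multi-line thoughts and malformed output.

```python
import re

def parse_react_output(text: str) -> dict:
    """Split a ReAct-formatted LLM completion into its labeled parts.

    Illustrative sketch only; real parsers must also cope with
    multi-line fields and malformed completions.
    """
    result = {}
    thought = re.search(r"Thought:\s*(.+)", text)
    if thought:
        result["thought"] = thought.group(1).strip()
    # A "Final Answer" ends the loop, so no action needs to be parsed.
    final = re.search(r"Final Answer:\s*(.+)", text, re.DOTALL)
    if final:
        result["final_answer"] = final.group(1).strip()
        return result
    action = re.search(r"Action:\s*(.+)", text)
    if action:
        result["action"] = action.group(1).strip()
    action_input = re.search(r"Action Input:\s*(.+)", text)
    if action_input:
        result["action_input"] = action_input.group(1).strip()
    return result

parsed = parse_react_output(
    "Thought: I should multiply.\nAction: calculate\nAction Input: 15 * 7"
)
# parsed["action"] == "calculate", parsed["action_input"] == "15 * 7"
```

Separating the "is this a final answer?" check from the tool-call parsing mirrors the two ways a ReAct iteration can end: with another action, or with a conclusion.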
The ReActAgent class from kerb.agent.patterns is a ready-to-use implementation of this pattern. It requires a function to call the LLM and a list of tools the agent can use.
For demonstration, we'll start with a mock LLM function that simulates the structured output a real LLM would produce. This helps illustrate the agent's mechanics without needing to configure a live API connection.
from kerb.agent.patterns import ReActAgent
from kerb.agent import Tool

def mock_llm_react(prompt: str) -> str:
    """A mock LLM that responds in the ReAct format."""
    # This simulates the LLM's reasoning process for a calculation.
    if "calculate" in prompt.lower() or "15 * 7" in prompt.lower():
        return """Thought: The user wants to multiply 15 by 7. I should use the calculator tool for this.
Action: calculate
Action Input: 15 * 7"""
    # This simulates the LLM's response after receiving the calculation result.
    elif "105" in prompt:
        return """Thought: I have the result of the calculation. I can now provide the final answer to the user.
Final Answer: The result of 15 * 7 is 105."""
    else:
        return "Thought: I am not sure what to do.\nFinal Answer: I cannot solve this problem."
def calculate(expression: str) -> str:
    """A simple calculator tool that evaluates a mathematical expression."""
    try:
        # eval is convenient for this demo but unsafe for untrusted input;
        # a production tool should use a restricted expression parser.
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"
# Define the tool for the agent
calc_tool = Tool(
    name="calculate",
    description="A calculator for mathematical expressions. Use this for any math-related questions.",
    func=calculate,
    parameters={"expression": {"type": "string", "description": "A valid mathematical expression"}}
)

# Initialize the agent
agent = ReActAgent(
    name="MathAgent",
    llm_func=mock_llm_react,
    tools=[calc_tool],
    max_iterations=5,
    verbose=True  # Set to True to see the loop in action
)
In this setup, the Tool object is significant. The description field is not just documentation; it is what the LLM reads to decide which tool fits a given task. A clear, specific description is essential for the agent to select tools correctly.
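To see why the description matters, consider how tool metadata typically reaches the model. The template below is a hypothetical illustration of a ReAct-style prompt built from tool metadata; the actual prompt ReActAgent constructs internally may differ.

```python
def build_react_prompt(goal: str, tools: list) -> str:
    """Assemble a ReAct-style prompt from tool metadata.

    Hypothetical template for illustration; kerb's ReActAgent may use a
    different internal prompt.
    """
    # Each tool's name and description become lines the LLM reads
    # when choosing an action.
    tool_lines = "\n".join(
        f"- {t['name']}: {t['description']}" for t in tools
    )
    return (
        "Answer the goal using the following tools:\n"
        f"{tool_lines}\n\n"
        "Use this format:\n"
        "Thought: <your reasoning>\n"
        "Action: <tool name>\n"
        "Action Input: <tool input>\n\n"
        f"Goal: {goal}"
    )

prompt = build_react_prompt(
    "What is 15 * 7?",
    [{"name": "calculate",
      "description": "A calculator for mathematical expressions."}],
)
print(prompt)
```

Because the description is pasted directly into the prompt, a vague description ("does stuff with numbers") gives the model little to reason about, while a precise one steers tool selection reliably.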
With the agent configured, we can give it a goal using the run method. The agent will then execute the ReAct loop until it reaches a final answer or hits its iteration limit.
# The goal for our agent
goal = "What is 15 * 7?"
result = agent.run(goal)

# Inspect the final output and the steps taken
print("\nFINAL RESULT:")
print(result.output)

print("\nEXECUTION TRACE:")
for i, step in enumerate(result.steps, 1):
    print(f"\n--- Step {i} ---")
    if step.thought:
        print(f"Thought: {step.thought}")
    if step.action:
        print(f"Action: {step.action}({step.action_input})")
    if step.observation:
        print(f"Observation: {step.observation}")
Running this code would produce an execution trace that clearly shows the agent's step-by-step process.
First loop iteration:
1. llm_func is called with a prompt containing the goal.
2. The mock LLM returns Thought: ... Action: calculate ... Action Input: 15 * 7.
3. ReActAgent parses this, identifies the calculate tool, and executes it with "15 * 7".
4. The calculate function returns "Result: 105". This becomes the observation.

Second loop iteration:
1. llm_func is called again, but this time the prompt includes the previous thought, action, and the new observation: "Observation: Result: 105".
2. The mock LLM returns Thought: ... Final Answer: The result of 15 * 7 is 105.
3. The agent detects the Final Answer keyword, stops the loop, and sets the final output.

The diagram below shows the flow of information in a ReAct agent. The agent's LLM core continuously cycles through reasoning and acting until it reaches a conclusion.
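The two iterations above can be condensed into a minimal loop. The function below is a stripped-down sketch of the think-act-observe cycle, not kerb's implementation, which adds prompt templating, robust parsing, error handling, and step tracking on top of this idea.

```python
def react_loop(llm_func, tools, goal, max_iterations=5):
    """A minimal ReAct loop: think, act, observe, repeat.

    Simplified sketch for illustration; assumes well-formed LLM output.
    """
    history = f"Goal: {goal}"
    for _ in range(max_iterations):
        output = llm_func(history)
        # A "Final Answer" ends the loop.
        if "Final Answer:" in output:
            return output.split("Final Answer:", 1)[1].strip()
        # Otherwise extract the tool call and execute it.
        name = output.split("Action:", 1)[1].split("\n", 1)[0].strip()
        arg = output.split("Action Input:", 1)[1].split("\n", 1)[0].strip()
        observation = tools[name](arg)
        # Feed the observation back for the next reasoning step.
        history += f"\n{output}\nObservation: {observation}"
    return "Stopped: iteration limit reached."

# Drive the loop with canned LLM responses, mirroring the trace above.
mock_responses = iter([
    "Thought: multiply.\nAction: calc\nAction Input: 15 * 7",
    "Thought: done.\nFinal Answer: 105",
])
answer = react_loop(
    lambda prompt: next(mock_responses),
    {"calc": lambda expr: str(eval(expr))},
    "What is 15 * 7?",
)
# answer == "105"
```

The essential mechanism is the growing history string: each observation is appended to the prompt, so every new thought is conditioned on everything the agent has already seen.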
The ReAct pattern combines the LLM's reasoning capabilities with the practical execution abilities of external tools.
By structuring agent behavior in this manner, ReAct creates a system that is both more capable and more interpretable. You can trace its "chain of thought" to understand how it arrived at a conclusion, making it an excellent pattern for building reliable and auditable autonomous agents.