Building an autonomous agent is a practical way to see the ReAct pattern in action. The ReActAgent class provides a ready-to-use implementation of the thought-action-observation loop for creating such agents.
The ReActAgent orchestrates the entire process. At its core, it requires a reasoning engine, an LLM, to generate thoughts and decide on actions. You provide this engine as a callable function. Let's look at the main components you'll use to construct an agent.
To start, you will need to import the ReActAgent class from the patterns submodule.
from kerb.agent.patterns import ReActAgent
When you create an instance of ReActAgent, you configure its behavior through several parameters:
- name: A string to identify your agent, which is useful for logging and debugging.
- llm_func: A Python function or any callable that takes a string prompt and returns the LLM's string response. This function acts as the agent's "brain," generating the thoughts and actions.
- tools: A list of Tool objects that the agent can use to perform actions. We will touch on this briefly here and explore it fully in the next section.
- max_iterations: An important safety measure that sets the maximum number of thought-action-observation cycles the agent can perform before stopping. This prevents agents from getting stuck in infinite loops and consuming excessive resources.

Let's begin with a simple agent that doesn't use any external tools. Its only "action" is to provide a final answer. This helps illustrate the core reasoning loop.
First, we need to define the llm_func that will serve as the agent's reasoning engine. For this example, we'll use a mock function that simulates an LLM's behavior. In a real application, this function would make an API call to a provider like OpenAI or Anthropic.
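For reference, a real llm_func might look like the following sketch using the OpenAI Python SDK. This is illustrative and not part of kerb; the model name is an arbitrary choice, and the client reads your API key from the OPENAI_API_KEY environment variable.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def openai_llm(prompt: str) -> str:
    """Reasoning engine backed by a hosted model (illustrative sketch)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content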
The LLM's output must follow a specific format that the ReActAgent can parse. The two main components are Thought and Final Answer.
- Thought: This is where the agent verbalizes its reasoning process. It should describe the agent's plan or thinking.
- Final Answer: This indicates that the agent has finished its task and is providing the final output.

Here is a mock LLM function demonstrating this structure.
def simple_llm(prompt: str) -> str:
    """A mock LLM that provides a final answer."""
    if "weather" in prompt.lower():
        return "Thought: I need to provide weather information.\nFinal Answer: The weather is sunny."
    else:
        return "Thought: I will process this request.\nFinal Answer: I have processed your request."
With the reasoning engine defined, we can now instantiate and run the agent. We will create an agent named "BasicAgent", pass our simple_llm function, and set a limit of 5 iterations.
from kerb.agent.patterns import ReActAgent
from kerb.agent import AgentResult

# The simple_llm function from above
def simple_llm(prompt: str) -> str:
    if "weather" in prompt.lower():
        return "Thought: I need to provide weather information.\nFinal Answer: The weather is sunny."
    else:
        return "Thought: I will process this request.\nFinal Answer: I have processed your request."

# Create a ReAct agent instance
agent = ReActAgent(
    name="BasicAgent",
    llm_func=simple_llm,
    max_iterations=5
)

# Define a goal and run the agent
goal = "What is the weather like today?"
result = agent.run(goal)

# Display the final output
print(f"Goal: {goal}")
print(f"Final Answer: {result.output}")
The agent.run() method executes the agent's loop until it produces a Final Answer or hits the max_iterations limit. The method returns an AgentResult object, which contains the final output, execution status, and a detailed list of all the steps taken.
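Before using the output, you can check how the run ended. The snippet below assumes the execution status is exposed as a status attribute, as the AgentResult description suggests; the exact attribute name may differ in your version of the library.

# Inspect the AgentResult returned by agent.run()
print(f"Status: {result.status}")           # execution status (attribute name assumed)
print(f"Steps taken: {len(result.steps)}")  # one entry per thought-action-observation cycle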
The final output is useful, but the true power of an agent lies in its ability to reason step-by-step. You can inspect this process by examining the steps attribute of the AgentResult. Each step in the list represents one cycle of the thought-action-observation loop.
Let's expand our previous example to print out the agent's thought process.
# ... (agent setup and run from previous example) ...

print("\nEXECUTION STEPS:")
for i, step in enumerate(result.steps, 1):
    print(f"\nStep {i}:")
    if step.thought:
        print(f"  Thought: {step.thought}")
    if step.action:
        print(f"  Action: {step.action}")
    if step.observation:
        print(f"  Observation: {step.observation}")
For our simple agent, the loop runs only once. The LLM generates a thought and immediately provides a final answer, so the action and observation fields will be empty. This demonstrates the simplest form of agentic behavior: think, then conclude.
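For the weather goal, the trace printed by the loop above should look something like this. The thought text comes straight from our mock LLM, though the exact formatting depends on how the framework parses the response.

EXECUTION STEPS:

Step 1:
  Thought: I need to provide weather information.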
Agents become significantly more powerful when they can interact with the outside environment through tools. A tool is simply a function that the agent can decide to call to gather information or perform an action.
To make an agent use tools, two things are needed:
- Tool objects must be passed to the agent during initialization.
- The llm_func must be prompted to produce output in the Thought, Action, and Action Input format when it decides a tool is necessary:
  - Action: The name of the tool to use (e.g., calculate).
  - Action Input: The argument to pass to the tool's function (e.g., 15 * 7).

A raw tool-use response from the LLM therefore looks like the example below.
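Thought: I need to calculate 15 multiplied by 7. I will use the calculator tool.
Action: calculate
Action Input: 15 * 7

Let's create a new mock LLM that simulates deciding to use a tool.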
def mock_llm_react(prompt: str) -> str:
    """Mock LLM that responds in ReAct format with actions."""
    # First turn: the LLM decides to use the 'calculate' tool
    if "calculate" in prompt.lower() or "15 * 7" in prompt.lower():
        return """Thought: I need to calculate 15 multiplied by 7. I will use the calculator tool.
Action: calculate
Action Input: 15 * 7"""
    # Second turn: after getting the tool's result (observation)
    elif "105" in prompt:
        return """Thought: I have the result of the calculation. Now I can provide the final answer.
Final Answer: The result of 15 * 7 is 105."""
    else:
        return "Thought: I am unsure how to proceed.\nFinal Answer: I cannot solve this problem."
Next, we define the tool itself. We will create a simple calculator function and wrap it in a Tool object. The name of the tool must exactly match the name used by the LLM in the Action field. We will cover tool creation in more detail in the next section.
from kerb.agent import Tool

def calculate(expression: str) -> str:
    """A simple calculator tool."""
    try:
        # Warning: eval() can execute arbitrary code and is insecure.
        # Use a safe expression parser in production (see the sketch below).
        return f"Result: {eval(expression)}"
    except Exception as e:
        return f"Error: {str(e)}"

calc_tool = Tool(
    name="calculate",
    description="Performs mathematical calculations",
    func=calculate
)
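If you'd rather avoid eval() entirely, one option is a small evaluator built on Python's ast module. This is a minimal sketch, not part of kerb, and it supports only basic arithmetic:

import ast
import operator

# Operators the safe evaluator is willing to apply
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_calculate(expression: str) -> str:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("Unsupported expression")
    try:
        tree = ast.parse(expression, mode="eval")
        return f"Result: {_eval(tree.body)}"
    except Exception as e:
        return f"Error: {str(e)}"

You could pass safe_calculate to the Tool constructor in place of calculate without changing anything else.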
Now, we instantiate the agent with the tool and run it.
# Create a ReAct agent with the calculator tool
agent_with_tool = ReActAgent(
    name="MathAgent",
    llm_func=mock_llm_react,
    tools=[calc_tool],
    max_iterations=5
)

# Run the agent with a goal that requires the tool
goal = "What is 15 * 7?"
result = agent_with_tool.run(goal)

# Display the full thought-action-observation loop
print("REACT LOOP:")
for i, step in enumerate(result.steps, 1):
    print(f"\n[Step {i}]")
    if step.thought:
        print(f"  Thought: {step.thought}")
    if step.action:
        # The agent framework automatically calls the tool
        print(f"  Action: {step.action}({step.action_input})")
    if step.observation:
        # The observation is the output from the tool
        print(f"  Observation: {step.observation}")

print("\n--------------------")
print(f"Final Answer: {result.output}")
When you run this code, you will see a multi-step process:
1. The LLM generates a Thought and an Action (calculate). The agent framework finds the calculate tool, executes it with the Action Input (15 * 7), and captures its output (Result: 105). This output becomes the Observation.
2. The agent takes the Observation from the first cycle and adds it to the history. It then calls the LLM again with the updated context. This time, the LLM sees the result and generates a Thought and a Final Answer.
3. The agent parses the Final Answer and stops the loop, returning the final result.

This example illustrates the complete ReAct cycle. By providing an agent with a reasoning engine and a set of tools, you can build systems capable of solving multi-step problems autonomously. The next section will cover how to properly define and manage a variety of tools to expand your agent's capabilities.