LLMs are frequently used for direct, single-turn tasks, such as generating text or answering questions based on provided context. Building systems that can pursue a goal with greater autonomy extends these capabilities. An autonomous agent uses an LLM as a reasoning engine to determine a sequence of steps, often interacting with external tools to gather information or perform actions. This process can be represented as a loop of thought, action, and observation.
In this chapter, you will learn to build these agents with the agent module. We will start by covering the ReAct (Reasoning and Acting) pattern, a common framework for structuring agent behavior. You will then implement a ReAct agent and learn how to provide it with tools, such as a search function or a simple calculator, to extend its capabilities. The core loop can be simplified to the following sequence: the LLM produces a thought, the agent takes an action (often a tool call), the result returns as an observation, and the cycle repeats until the LLM emits a final answer.
Next, we will cover an alternative architecture known as a plan-and-execute agent. Finally, we will discuss the principles for building systems where multiple agents can collaborate to solve more complex problems. By the end of this chapter, you will be able to construct agents that can break down a problem, use tools to find solutions, and act on your behalf.
At its core, an autonomous agent is a system designed to achieve a goal by repeatedly making decisions and taking actions. While the LLM acts as the "brain," the agent is the complete framework that enables this brain to perceive, reason, and act within an environment. This framework is built on a few essential components: a reasoning engine (the LLM), a set of tools the agent can invoke, and a control loop that feeds each action's result back to the LLM as an observation.
The agent repeats this fundamental cycle until the LLM determines it has achieved the goal and has enough information to provide a final answer.
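Conceptually, the loop can be expressed in a few lines of Python. The sketch below is purely illustrative, not the library's implementation: it assumes the LLM marks completion with a plain-text "Final Answer:" marker and requests tools with an "Action:" / "Action Input:" convention.

from typing import Callable, Dict

def agent_loop(goal: str,
               llm: Callable[[str], str],
               tools: Dict[str, Callable[[str], str]],
               max_iterations: int = 5) -> str:
    """Illustrative thought-action-observation loop (not kerb's implementation)."""
    context = goal
    for _ in range(max_iterations):
        response = llm(context)  # Thought: the LLM reasons over the goal and history
        if "Final Answer:" in response:
            # The LLM has decided it can answer; stop the loop.
            return response.split("Final Answer:")[-1].strip()
        if "Action:" not in response:
            return response.strip()  # No action requested and no answer; give up
        # Action: parse the requested tool name and its input.
        name = response.split("Action:")[-1].split("\n")[0].strip()
        args = response.split("Action Input:")[-1].split("\n")[0].strip()
        observation = tools[name](args)
        # Observation: feed the tool result back for the next thought.
        context += f"\n{response}\nObservation: {observation}"
    return "Stopped: iteration limit reached."

The agent module automates this kind of bookkeeping for you: prompt assembly, output parsing, tool dispatch, and iteration limits.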
The agent module provides the necessary components to construct these autonomous systems. The primary building block is the Agent class, which orchestrates the reasoning loop. A common and effective implementation of this is the ReActAgent, which we will use to get started.
Let's create a simple agent. For this first example, we will use a mock LLM function to simulate the reasoning engine's output. In a real application, this would be a call to an LLM provider like OpenAI or Anthropic.
from kerb.agent.patterns import ReActAgent

def simple_llm(prompt: str) -> str:
    """A mock LLM function for demonstration."""
    # In a real application, this would call an LLM API.
    # This mock provides a ReAct-style response.
    if "weather" in prompt.lower():
        return "Thought: I need to provide weather information.\nFinal Answer: The weather is sunny with a temperature of 72°F."
    else:
        return "Thought: I'll process this request.\nFinal Answer: I have processed your request."

# Create a ReAct agent, a concrete implementation of the base Agent
agent = ReActAgent(
    name="BasicAgent",
    llm_func=simple_llm,
    max_iterations=5
)

# Run the agent with a goal
goal = "What is the weather like today?"
result = agent.run(goal)

# The result object contains the final output and execution details
print(f"Status: {result.status.value}")
print(f"Output: {result.output}")

# You can also inspect the steps the agent took
if result.steps:
    print("\nExecution Steps:")
    for i, step in enumerate(result.steps, 1):
        if step.thought:
            print(f"  Step {i} Thought: {step.thought}")
In this example, we instantiate a ReActAgent and provide our simple_llm as the reasoning engine. When we call agent.run(goal), the agent begins its loop. It passes the goal to the llm_func, which returns a string containing a thought and a final answer. The agent framework parses this output and, upon seeing "Final Answer," concludes its run. The AgentResult object gives us access to both the final output and a list of steps taken during the reasoning process.
An agent's true power comes from its ability to use tools. Without them, an LLM is confined to its pre-trained knowledge. Tools allow an agent to access real-time information, perform calculations, interact with databases, or call external APIs.
You can define a tool by wrapping a Python function with the Tool class. The most important parameter is the description, as the LLM uses this text to decide which tool is appropriate for a given task. A good description clearly and concisely explains what the tool does and what its inputs are.
Here is how you can create a simple calculator tool.
from kerb.agent import Tool

def calculate(expression: str) -> str:
    """A calculator tool that evaluates a mathematical expression."""
    try:
        # Restrict builtins to limit eval's reach. Note: this is not
        # fully safe for untrusted input; use a dedicated expression
        # parser in production.
        result = eval(expression, {"__builtins__": {}}, {})
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

# Create a tool from the function
calc_tool = Tool(
    name="calculate",
    description="Performs mathematical calculations, such as addition, subtraction, multiplication, and division.",
    func=calculate,
    parameters={
        "expression": {
            "type": "string",
            "description": "The mathematical expression to evaluate (e.g., '15 * 7')."
        }
    }
)
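Because the tool wraps an ordinary Python function, you can sanity-check that function directly before handing the tool to an agent:

print(calculate("15 * 7"))  # Result: 105
print(calculate("10 / 0"))  # Error: division by zero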
Once defined, you provide a list of tools to the agent during initialization. The agent's underlying prompt is automatically updated to include the names and descriptions of the available tools, instructing the LLM on how to use them.
When the LLM decides to use a tool, it outputs a specific format indicating the tool's name and the arguments to pass. The agent framework intercepts this output, executes the corresponding Python function, and feeds the function's return value back to the LLM as an "observation." This allows the LLM to incorporate the new information into its next thought.
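The sketch below puts these pieces together: a mock LLM issues one calculator call and then answers once the observation appears in its prompt. Two details here are assumptions rather than confirmed API, namely that ReActAgent accepts its tool list via a tools parameter and that its parser expects the classic "Action:" / "Action Input:" format; check the library's documentation for the exact syntax.

from kerb.agent.patterns import ReActAgent

def tool_using_llm(prompt: str) -> str:
    """Mock LLM: request the calculator once, then finish."""
    if "Result: 105" in prompt:
        # The tool's observation is now in the prompt, so answer.
        return "Thought: I have the result.\nFinal Answer: 15 * 7 = 105."
    # Classic ReAct tool-call format (assumed; kerb's parser may differ).
    return "Thought: I should use the calculator.\nAction: calculate\nAction Input: 15 * 7"

math_agent = ReActAgent(
    name="MathAgent",
    llm_func=tool_using_llm,
    tools=[calc_tool],  # assumed parameter name for registering tools
    max_iterations=5
)

result = math_agent.run("What is 15 * 7?")
print(f"Output: {result.output}")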
With these fundamentals in place (the reasoning loop, the agent, and its tools), we can now explore specific patterns for orchestrating agent behavior. The next section details one of the most common and effective patterns: ReAct.