While a single autonomous agent can accomplish impressive tasks, its capabilities are often limited by its single "thread" of reasoning. For problems that are too large or complex for one agent, you can orchestrate a team of agents that collaborate to achieve a common goal. This approach mirrors how human teams work, with different members specializing in specific roles and communicating to solve a problem.
Multi-agent systems allow you to break down a complex task into smaller, manageable sub-tasks, assigning each to a specialized agent. For instance, instead of a single "research assistant" agent, you might build a team consisting of a Researcher to find information, a Writer to draft content, and an Editor to refine it.
The kerb.agent module provides tools to build and manage these collaborative systems. The central component is the AgentTeam class, which orchestrates the interactions between multiple agents.
An AgentTeam is a collection of individual agent instances. Each agent in the team can have its own distinct prompt, set of tools, and configuration, allowing you to define specialized roles.
Let's start by defining a few specialist agents and grouping them into a team.
from kerb.agent.patterns import ReActAgent as Agent
from kerb.agent.teams import AgentTeam
# Define mock LLM functions for each agent's specialty
def researcher_llm(prompt: str) -> str:
    """LLM for researcher agent."""
    return "I found the following information: Python is a popular programming language."

def writer_llm(prompt: str) -> str:
    """LLM for writer agent."""
    return "I have written a comprehensive article based on the research findings."

def editor_llm(prompt: str) -> str:
    """LLM for editor agent."""
    return "I have reviewed and edited the content. It's now polished and ready."
# Create individual agents with specialized roles
researcher = Agent(name="Researcher", llm_func=researcher_llm)
writer = Agent(name="Writer", llm_func=writer_llm)
editor = Agent(name="Editor", llm_func=editor_llm)
# Create an AgentTeam
writing_team = AgentTeam(agents=[researcher, writer, editor])
print(f"Created team with {len(writing_team.agents)} agents.")
With a team assembled, you can orchestrate their collaboration using different patterns, such as a sequential pipeline or a parallel workflow.
A common multi-agent pattern is a sequential pipeline, where agents work in a predefined order, like an assembly line. The output of one agent becomes the input for the next. This is useful for structured, multi-step tasks like our research-and-write example.
A sequential pipeline where the output of each agent is passed to the next.
The run_sequential method executes the agents in the order they were added to the team. The final output of the entire process is the output from the last agent in the sequence.
goal_sequential = "Create an article about Python"
print(f"Goal: {goal_sequential}")
print("Running agents sequentially (Researcher -> Writer -> Editor)...")
sequential_results = writing_team.run_sequential(goal_sequential)
# Display results from each step
for i, result in enumerate(sequential_results, start=1):
    agent_name = writing_team.agents[i - 1].name
    print(f"\n[Step {i}] {agent_name}:")
    print(f" Output: {result.output[:100]}...")
    if i < len(sequential_results):
        print(" -> Passed to next agent")
This pipeline structure ensures a clear flow of information and is highly effective for processes with well-defined stages.
Another approach is parallel execution, where multiple agents work on the same goal simultaneously. This pattern is useful for gathering diverse perspectives or solutions to a single problem. Each agent tackles the task independently, and their results can be reviewed or aggregated to form a more comprehensive final output.
A parallel workflow where multiple agents tackle the same goal.
The run_parallel method broadcasts the goal to all agents in the team and collects their individual results.
from kerb.agent.teams import aggregate_results
goal_parallel = "Research the benefits of using Python for AI"
print(f"Goal: {goal_parallel}")
parallel_results = writing_team.run_parallel(goal_parallel)
print("\nIndividual Results:")
for i, result in enumerate(parallel_results):
    agent_name = writing_team.agents[i].name
    print(f" - {agent_name}: {result.output[:60]}...")
# Aggregate the individual outputs into a single result
aggregated_result = aggregate_results(parallel_results)
print("\nAggregated Result:")
print(f" Combined Output: {aggregated_result.output[:100]}...")
The aggregate_results function combines the outputs from all agents into a single AgentResult object. This allows you to either present all viewpoints or pass the combined result to another agent for synthesis.
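One simple way to hand the combined result to another agent is to fold it into a synthesis prompt. The snippet below is a minimal sketch; the prompt wording is purely illustrative and not part of the kerb API:
synthesis_prompt = (
    "Synthesize the following combined findings into a short summary:\n"
    f"{aggregated_result.output}"
)
print(f"\nSynthesis prompt preview: {synthesis_prompt[:120]}...")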
More complex systems often use a hierarchical structure where a "manager" or "coordinator" agent decomposes a large problem and delegates sub-tasks to specialized "worker" agents. The delegate_task function facilitates this direct, agent-to-agent task assignment.
from kerb.agent.teams import delegate_task
# Define another specialized agent
def analyst_llm(prompt: str) -> str:
    return "Analysis complete: The data shows a positive trend."
analyst = Agent(name="Analyst", llm_func=analyst_llm)
print(f"\n{researcher.name} delegates a task to {analyst.name}")
delegated_task = "Analyze the research findings for market trends"
delegation_result = delegate_task(
    task=delegated_task,
    from_agent=researcher,
    to_agent=analyst,
    context={'source': 'research_data.csv'}
)
print(f"\nDelegation complete:")
print(f" Task: {delegated_task}")
print(f" Result from {analyst.name}: {delegation_result.output}")
Delegation enables dynamic and adaptive workflows where agents can request help from others based on the evolving state of the main task.
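As a sketch of what such a coordinator might look like, the snippet below fans out a couple of sub-tasks using the same delegate_task signature shown above. The choice of the editor as coordinator and the specific sub-tasks are illustrative assumptions, not part of the library:
# Illustrative coordinator loop: the editor routes sub-tasks to specialists
sub_tasks = [
    ("Gather background sources on Python for AI", researcher),
    ("Draft a summary of the key findings", writer),
]
for sub_task, worker in sub_tasks:
    sub_result = delegate_task(
        task=sub_task,
        from_agent=editor,  # acting as the coordinator in this sketch
        to_agent=worker,
        context={"parent_goal": "Create an article about Python for AI"},
    )
    print(f" {worker.name}: {sub_result.output[:60]}...")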
For agents to collaborate effectively, they need a way to communicate and share state. The Conversation class acts as a shared message log, giving every agent visibility into the history of interactions. This shared context lets each agent make decisions informed by what its teammates have already done.
from kerb.agent.teams import Conversation
# A shared conversation for the team
conversation = Conversation()
# Simulate a conversation between agents
conversation.add_message(researcher.name, "I've completed the research on Python for AI.")
conversation.add_message(writer.name, "Great! I'll use that to write the article.")
conversation.add_message(writer.name, "Article is drafted and ready for review.")
conversation.add_message(editor.name, "I'll review it now and provide feedback.")
print("\nConversation History:")
for msg in conversation.get_history():
    print(f" [{msg['timestamp']}] {msg['agent']}: {msg['content']}")
This conversation history can be included in an agent's context, allowing it to understand what has already been discussed and decided.
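For example, you can flatten the history into a block of text and prepend it to the next agent's prompt. This is a minimal sketch that assumes the message dictionaries expose the 'agent' and 'content' keys shown above; the task wording is illustrative:
# Flatten the shared history into a context block for the next prompt
history_block = "\n".join(
    f"{msg['agent']}: {msg['content']}" for msg in conversation.get_history()
)
editor_prompt = (
    "Conversation so far:\n"
    f"{history_block}\n\n"
    "Task: Incorporate the feedback and finalize the article."
)
print(editor_prompt)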
By combining specialized agents, clear orchestration patterns, and shared communication channels, you can build sophisticated multi-agent systems capable of solving problems far beyond the reach of any single agent.