Having established the core principles of LLM agent tooling in the previous chapter, we now shift our focus to the "how": the practical construction of these tools using Python. This section will guide you through the two primary ways to structure your custom tools: as straightforward Python functions or as more organized Python classes. Your choice here will depend on the complexity of the task the tool needs to perform and whether it needs to remember information across multiple uses.
Python's readability and extensive libraries make it an excellent choice for developing tools for LLM agents. Whether you're performing a simple calculation, querying a database, or interacting with a web service, Python provides the necessary building blocks.
For many tasks, a simple Python function is all you need to create an effective tool. Functions are ideal for tools that are stateless, perform a single well-defined operation, and require little or no setup.
Think of a function-based tool as a specialist that does one thing very well and doesn't need to keep notes between jobs.
A well-structured function tool should include a clear, descriptive name, type hints for every parameter and the return value, a docstring explaining its purpose and arguments, and basic validation of its inputs.
Let's look at an example. Suppose we want a tool that can calculate the area of a rectangle.
```python
def calculate_rectangle_area(length: float, width: float) -> float:
    """
    Calculates the area of a rectangle.

    Args:
        length (float): The length of the rectangle. Must be a positive number.
        width (float): The width of the rectangle. Must be a positive number.

    Returns:
        float: The calculated area of the rectangle.
    """
    if length <= 0 or width <= 0:
        # It's good practice to handle invalid inputs,
        # though more robust error handling will be covered later.
        raise ValueError("Length and width must be positive numbers.")
    return length * width

# Example usage (not part of the tool itself, but for demonstration)
# area = calculate_rectangle_area(10.0, 5.0)
# print(f"The area is: {area}")
```
In this `calculate_rectangle_area` tool, the docstring clearly tells the LLM (and any human developer) its purpose, the expected `length` and `width` arguments (including their types and a constraint), and what it will return. The type hints (`length: float`, `width: float`, `-> float`) provide additional structural information.
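To see what a framework has to work with, you can inspect a tool function's signature and docstring directly. The following is a minimal sketch of the kind of introspection an agent framework performs behind the scenes; real frameworks expose this through their own APIs.

```python
import inspect

def calculate_rectangle_area(length: float, width: float) -> float:
    """Calculates the area of a rectangle."""
    if length <= 0 or width <= 0:
        raise ValueError("Length and width must be positive numbers.")
    return length * width

# The signature exposes parameter names, type hints, and the return type,
# which is exactly the structural information a framework relays to the LLM.
sig = inspect.signature(calculate_rectangle_area)
print(sig)                               # (length: float, width: float) -> float
print(calculate_rectangle_area.__doc__)  # the description the LLM would read
```

If the signature or docstring is missing or vague, that impoverished view is all the LLM ever sees of your tool.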
Simple functions cannot, however, retain information between calls or manage long-lived resources. When your tool's requirements grow beyond these limitations, it's time to consider using Python classes.
When you need to build tools that are stateful, group related functionalities, or require more complex setup, Python classes offer a more robust and organized approach. A class can encapsulate both data (state) and the methods (behaviors) that operate on that data.
Think of a class-based tool as a more versatile worker who can remember past interactions, manage their own resources, and offer a suite of related services.
Class-based tools are a good fit in several situations:

- Grouping related functionality: a `FileManager` tool might have methods to `read_file`, `write_file`, and `list_directory_contents`.
- Maintaining state: instance attributes can remember information across multiple tool calls.
- Performing setup: the constructor (`__init__`) is the natural place for this.
- Managing resources: you can implement `__enter__` and `__exit__` for context management, ensuring resources are properly acquired and released.

You define a class-based tool with the `class` keyword. The `__init__` method (constructor) is called when an instance of the class is created. Use it for any initial setup, such as storing data as instance attributes (e.g., `self.my_data`).

Let's consider a `UserProfileTool` that can store and retrieve simple user preferences.
```python
class UserProfileTool:
    """
    Manages simple user profile information like name and preferred city.
    This tool allows setting and getting these preferences.
    """

    def __init__(self):
        """Initializes an empty user profile."""
        self._name: str | None = None
        self._preferred_city: str | None = None
        print("UserProfileTool initialized.")  # For demonstration

    def set_preference(self, key: str, value: str) -> str:
        """
        Sets a user preference.
        Currently supports 'name' and 'preferred_city'.

        Args:
            key (str): The preference key (e.g., 'name', 'preferred_city').
            value (str): The value for the preference.

        Returns:
            str: A confirmation message.
        """
        if key == "name":
            self._name = value
            return f"User name set to '{value}'."
        elif key == "preferred_city":
            self._preferred_city = value
            return f"Preferred city set to '{value}'."
        else:
            return f"Unknown preference key: '{key}'. Supported keys are 'name', 'preferred_city'."

    def get_preference(self, key: str) -> str | None:
        """
        Retrieves a user preference.

        Args:
            key (str): The preference key to retrieve (e.g., 'name', 'preferred_city').

        Returns:
            str | None: The value of the preference, or None if not set or key is unknown.
        """
        if key == "name":
            return self._name
        elif key == "preferred_city":
            return self._preferred_city
        else:
            # For unknown keys, agent frameworks might prefer an error or a specific message.
            # Here we return None, but you might also raise an error or return a message.
            print(f"Attempted to get unknown preference key: {key}")
            return None

# Example usage:
# profile_tool = UserProfileTool()
# print(profile_tool.set_preference("name", "Alex"))
# print(profile_tool.set_preference("preferred_city", "New York"))
# print(f"User's name: {profile_tool.get_preference('name')}")
# print(f"User's city: {profile_tool.get_preference('preferred_city')}")
# print(f"User's favorite_color: {profile_tool.get_preference('favorite_color')}")
```
In this `UserProfileTool`, the `__init__` method initializes `_name` and `_preferred_city` (internal state prefixed with an underscore by convention to indicate they are for internal use, though still accessible). The `set_preference` and `get_preference` methods operate on this state. Each method has its own docstring, clearly defining its function for the LLM agent. An LLM would be presented with `set_preference` and `get_preference` as available actions within the `UserProfileTool`.
The decision of whether to implement a tool as a Python function or a class hinges on its requirements, particularly around state and complexity.
A decision guide for choosing between Python functions and classes for tool implementation. If a tool requires state or groups multiple related operations, a class is generally preferred. Otherwise, a simpler function may suffice.
Here's a quick summary: prefer a function when the tool is stateless, performs a single well-defined operation, and needs no setup; prefer a class when the tool must remember state between calls, groups several related operations, or manages setup and resources.
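The state distinction can be made concrete with a deliberately tiny pair of examples (hypothetical tools, for illustration only):

```python
def add(a: int, b: int) -> int:
    """A stateless tool: every call is independent."""
    return a + b

class Counter:
    """A stateful tool: remembers how many times it has been used."""

    def __init__(self):
        self._calls = 0

    def count(self) -> int:
        """Increments and returns the running call count."""
        self._calls += 1
        return self._calls

counter = Counter()
print(add(2, 3), add(2, 3))              # 5 5 -- no memory between calls
print(counter.count(), counter.count())  # 1 2 -- state persists on the instance
```

If your tool would ever need something like `_calls`, reach for a class; otherwise a function keeps things simpler.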
Regardless of whether you choose a function or a class, two elements are consistently important for making your Python code usable as an LLM tool: docstrings and type hints.
Docstrings: As emphasized earlier, LLM agent frameworks often parse these to understand what your tool does. The LLM itself relies on these descriptions to decide when the tool is relevant, to supply arguments in the expected form, and to interpret what the tool returns.
Type Hints: Python's type hints (e.g., `name: str`, `count: int`, `-> list[str]`) serve multiple purposes: they tell the framework (and, through it, the LLM) exactly what type each argument and return value should have, they let many frameworks generate structured tool schemas automatically, and they help static analysis tools catch mistakes in your own code.
```python
# Good use of type hints and docstrings
def search_knowledge_base(query: str, filters: list[str] | None = None) -> list[dict]:
    """
    Searches the knowledge base for articles matching the query.
    Can optionally apply filters.

    Args:
        query (str): The search term or question.
        filters (list[str] | None, optional): A list of filter strings
            to apply (e.g., ['category:technical', 'tag:python']).
            Defaults to None (no filters).

    Returns:
        list[dict]: A list of search results, where each result is a dictionary
            containing 'title', 'summary', and 'url'. Returns an empty
            list if no results are found.
    """
    # ... implementation details ...
    print(f"Searching for: {query} with filters: {filters}")  # Placeholder
    if query == "python tools":
        return [
            {"title": "Python Functions as Tools", "summary": "...", "url": "/ch2/functions"},
            {"title": "Python Classes for Tools", "summary": "...", "url": "/ch2/classes"}
        ]
    return []
```
In this example, the combination of `query: str` and its docstring explanation gives the LLM precise information. Similarly, `-> list[dict]` along with its description in the "Returns" section of the docstring prepares the LLM for the structure of the output.
While we are focusing on writing the Python code for the tools themselves, it's useful to keep in mind how these functions and classes become "active" tools for an LLM. Typically, you'll use an LLM agent framework (such as LangChain, LlamaIndex, or even a custom-built one). These frameworks provide mechanisms to register your functions or class methods as tools, expose their names, signatures, and docstrings to the LLM, and route the LLM's tool-call requests back to your Python code.
We won't go into the specifics of any single framework in this section, but understanding this general workflow helps contextualize why clear function/method signatures, comprehensive docstrings, and accurate type hints are so important. They are the bridge between your Python code and the LLM's decision-making process.
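As a rough sketch of that bridge, here is one way a framework might derive a tool description from a function's metadata. The `describe_tool` helper and the shape of the returned dictionary are assumptions for illustration; real frameworks emit their own (often JSON-schema-based) formats.

```python
import inspect

def search_knowledge_base(query: str, max_results: int = 5) -> list:
    """Searches the knowledge base for articles matching the query."""
    return []

def describe_tool(func) -> dict:
    """Builds a simple tool description from a function's signature and docstring."""
    sig = inspect.signature(func)
    params = {
        name: {
            # Fall back to "any" when a parameter has no type hint.
            "type": p.annotation.__name__ if p.annotation is not inspect.Parameter.empty else "any",
            # Parameters without a default value are required.
            "required": p.default is inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func),
        "parameters": params,
    }

print(describe_tool(search_knowledge_base))
```

Notice that every field in the description comes straight from the signature, type hints, and docstring: if any of those are missing, the LLM receives a correspondingly blank entry.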
By mastering the implementation of tools as both Python functions and classes, you gain the flexibility to create a wide range of capabilities for your LLM agents. As we proceed, we'll build upon these foundational structures to add more sophisticated features like state management, external service interaction, and robust error handling.
© 2025 ApX Machine Learning