While standard tools offer a solid foundation for many tasks, the true utility of agents emerges when they can interact with your specific environment. Many applications require agents to connect to proprietary databases, internal services, or specialized third-party APIs. Creating custom tools makes this possible.
A custom tool is essentially a Python function packaged with a clear description that an LLM can understand. The agent's effectiveness depends almost entirely on how well you define these tools.
At its core, a LangChain tool consists of three primary components: a name that identifies the tool to the agent, a description that tells the LLM what the tool does and when to use it, and a function that executes the actual logic when the agent calls the tool.
You can construct a tool directly using the Tool class from langchain.tools. Let's create a simple tool that calculates the circumference of a circle given its radius.
```python
import math

from langchain.tools import Tool

def calculate_circumference(radius: str) -> str:
    """Calculates the circumference of a circle given its radius."""
    try:
        # The agent provides input as a string, so we must cast it.
        radius_float = float(radius)
        circumference = 2 * math.pi * radius_float
        return f"The circumference is {circumference:.2f}."
    except ValueError:
        return "Invalid input. Please provide a valid number for the radius."

circumference_tool = Tool(
    name="CircleCircumferenceCalculator",
    description="Use this tool to calculate the circumference of a circle. The input should be the radius of the circle as a number.",
    func=calculate_circumference,
)

print(circumference_tool.invoke("10"))
```
In this example, the description clearly tells the LLM the tool's purpose and its expected input format. The function itself includes basic error handling, a practice you should always follow.
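To see why that error handling matters, consider what happens when the agent passes malformed input. Because the function returns a readable error message instead of raising an exception, the agent can read the message and retry with a corrected argument. The sketch below exercises both paths using the same logic as the tool's underlying function, written stdlib-only so it runs without LangChain installed:

```python
import math

def calculate_circumference(radius: str) -> str:
    """Same validation logic as the tool's underlying function."""
    try:
        radius_float = float(radius)
        return f"The circumference is {2 * math.pi * radius_float:.2f}."
    except ValueError:
        return "Invalid input. Please provide a valid number for the radius."

# Valid input: the string "10" is cast to a float.
print(calculate_circumference("10"))   # The circumference is 62.83.

# Invalid input: the function returns an error message instead of raising,
# giving the agent a chance to self-correct on its next step.
print(calculate_circumference("ten"))  # Invalid input. Please provide a valid number for the radius.
```

An unhandled exception would instead surface as a traceback and typically halt the agent's run, which is why returning descriptive error strings is the safer default for tools.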
Writing out the Tool class definition for every function can become repetitive. LangChain provides a more Pythonic and convenient way to create tools using the @tool decorator. This decorator automatically infers the tool's name from the function name and, importantly, uses the function's docstring as its description.
This approach not only reduces boilerplate code but also encourages good documentation practices. Let's refactor our circumference calculator.
```python
import math

from langchain.tools import tool

@tool
def circle_circumference_calculator(radius: str) -> str:
    """
    Calculates the circumference of a circle.
    Use this tool when you need to find the circumference of a circle given its radius.
    The input to this tool should be a string representing the radius number.
    """
    try:
        radius_float = float(radius)
        circumference = 2 * math.pi * radius_float
        return f"The circumference is {circumference:.2f}."
    except ValueError:
        return "Invalid input. Please provide a valid number for the radius."

# The tool is now a callable object with name and description attributes
print(circle_circumference_calculator.name)
print(circle_circumference_calculator.description)
print(circle_circumference_calculator.invoke("10"))
```
The output shows that the decorator correctly assigned the function name and docstring to the tool's properties. For most custom tools, the decorator is the recommended method.
The following diagram illustrates how an agent uses a tool's description to decide on an action.
The agent's decision-making loop. The LLM reviews the descriptions of available tools to select the appropriate action based on the user's query and its internal reasoning process.
By default, tools created with the @tool decorator or the Tool class accept a single string argument. However, many functions and API calls require multiple, structured arguments. You can define a structured input schema for your tool using a Pydantic BaseModel.
This gives the LLM a clear format for the arguments it needs to generate, improving reliability. Let's create a tool for a "User Profile API" that requires a user_id and an optional include_history flag.
```python
from langchain.tools import tool
from pydantic import BaseModel, Field

class UserProfileInput(BaseModel):
    """Input model for the user profile tool."""
    user_id: int = Field(description="The unique identifier for the user.")
    include_history: bool = Field(default=False, description="Whether to include the user's order history.")

@tool(args_schema=UserProfileInput)
def get_user_profile(user_id: int, include_history: bool = False) -> str:
    """
    Fetches a user's profile information.
    Use this to get details for a specific user ID.
    You can optionally include their order history.
    """
    # In a real application, this would query a database or API.
    profile = f"Profile for user {user_id}: Name - Alex, Member since 2022."
    if include_history:
        history = " Order history: [Order #123, Order #456]."
        return profile + history
    return profile

# The agent now knows how to structure the input
print(get_user_profile.args)
```
By providing an args_schema, you instruct the agent on exactly how to format its request to the tool. The LLM will now attempt to generate a JSON object matching the UserProfileInput schema when it decides to use this tool.
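For intuition, the model's side of this exchange is just a JSON object matching the schema. The sketch below shows what parsing and dispatching that payload looks like; the payload string is a hypothetical model output, and get_user_profile is re-implemented as a plain function so the example runs with the standard library alone:

```python
import json

def get_user_profile(user_id: int, include_history: bool = False) -> str:
    """Plain-function stand-in for the tool's underlying logic."""
    profile = f"Profile for user {user_id}: Name - Alex, Member since 2022."
    if include_history:
        return profile + " Order history: [Order #123, Order #456]."
    return profile

# A hypothetical arguments payload, as an LLM would generate it to match
# the UserProfileInput schema.
llm_tool_call = '{"user_id": 42, "include_history": true}'

# The framework parses the JSON and passes the fields as keyword arguments.
args = json.loads(llm_tool_call)
print(get_user_profile(**args))
```

In the real pipeline, Pydantic validates this payload against the schema before the function runs, so a malformed call (a string user_id, say) is rejected with a clear error rather than reaching your code.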
Let's build a more practical tool that fetches real data from an external API. We will create a tool to get the current weather for a given city using the OpenWeatherMap API.
First, ensure you have an API key from OpenWeatherMap and the requests library is installed (pip install requests).
```python
import os

import requests
from langchain.tools import tool
from pydantic import BaseModel, Field

# Set your API key as an environment variable before running:
# os.environ["OPENWEATHERMAP_API_KEY"] = "your_api_key_here"

class WeatherInput(BaseModel):
    """Input for the GetCurrentWeather tool."""
    city: str = Field(description="The city name for which to get the weather.")

@tool(args_schema=WeatherInput)
def get_current_weather(city: str) -> str:
    """
    Fetches the current weather for a specified city.
    Use this tool to find out the temperature and weather conditions for any city.
    """
    api_key = os.getenv("OPENWEATHERMAP_API_KEY")
    if not api_key:
        return "Error: OPENWEATHERMAP_API_KEY environment variable not set."
    base_url = "https://api.openweathermap.org/data/2.5/weather"
    params = {"q": city, "appid": api_key, "units": "metric"}
    try:
        response = requests.get(base_url, params=params, timeout=10)
        response.raise_for_status()  # Raises an HTTPError for bad responses (4xx or 5xx)
        data = response.json()
        temperature = data["main"]["temp"]
        description = data["weather"][0]["description"]
        return f"The current weather in {city} is {temperature}°C with {description}."
    except requests.exceptions.HTTPError:
        return "HTTP error occurred: City not found or API error."
    except Exception as e:
        return f"An error occurred: {e}"

# Example of running the tool directly
print(get_current_weather.invoke({"city": "London"}))
```
This tool is reliable: it has a structured input, a clear description, calls an external service, and handles potential errors like a missing API key or an invalid city name. This is the quality of tool you should aim to build for your agents. When integrated into an agent, it can now answer questions like "What's the weather like in Tokyo?" by reasoning that it needs to use the get_current_weather tool with the argument city="Tokyo".
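When the agent handles that question, the model does not execute the function itself; it emits a structured tool call that the framework parses and dispatches. The sketch below shows the shape of that exchange. The field names follow the common OpenAI-style tool-calling convention and may differ across providers, and the weather function here is a canned stand-in so the example runs offline:

```python
# Illustrative shape of the tool call a chat model emits when it decides
# to use get_current_weather. Field names (name/args/id) follow the common
# OpenAI-style convention and are an assumption; providers vary.
tool_call = {
    "name": "get_current_weather",
    "args": {"city": "Tokyo"},
    "id": "call_001",  # hypothetical call identifier
}

def fake_get_current_weather(city: str) -> str:
    """Offline stand-in for the real tool; returns a canned response."""
    return f"The current weather in {city} is 18°C with clear sky."

# The framework (not the model) looks up the tool by name and invokes it
# with the parsed arguments.
tool_registry = {"get_current_weather": fake_get_current_weather}
result = tool_registry[tool_call["name"]](**tool_call["args"])

# The tool's return value is then passed back to the model as an observation.
print(result)
```

This separation is what makes tool descriptions so important: the model only ever sees the name, description, and schema, and everything else happens outside the model.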