Building a LangChain agent involves equipping it with custom tools that interact with external data sources or perform specific calculations. This approach simulates common production scenarios where an agent requires specialized capabilities beyond the LLM's inherent knowledge.
Our goal is to create an agent that can answer questions about current weather and estimate driving times between locations. This requires giving the agent access to two distinct functionalities via custom tools.
Imagine you need an assistant that can answer queries like:

- "What's the weather like right now in Toronto?"
- "How long does it take to drive from Berlin to Munich?"
- "What's the current weather in Rome, and how long would it take to drive there from Naples?"

To handle these requests, the agent needs:

- A way to look up current weather conditions for a given city.
- A way to estimate driving time between two locations.
We will implement these as custom LangChain tools and integrate them into a Tool Calling agent.
First, let's create a tool to get weather data. For a real application, you'd likely use a service like OpenWeatherMap, WeatherAPI, or a similar provider. This usually involves obtaining an API key. For simplicity in this practice section, we'll define a function that returns mock weather data. However, we'll structure it as if it were calling a real API.
# Prerequisites: ensure you have langchain, langchain-openai, langchain-core,
# langchainhub (needed for hub.pull below), and python-dotenv installed
# pip install langchain langchain-openai langchain-core langchainhub python-dotenv
import os
import random
from typing import Type

from dotenv import load_dotenv
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field

# Load environment variables (optional, for API keys if using real APIs)
load_dotenv()
# --- Weather Tool Implementation ---
class WeatherInput(BaseModel):
    """Input schema for the Weather Tool."""
    location: str = Field(description="The city name for which to get the weather.")

def get_current_weather(location: str) -> str:
    """
    Simulates fetching current weather for a location.
    In a real application, this would call an external weather API.
    """
    print(f"---> Calling Weather Tool for: {location}")
    # Simulate API call
    try:
        # Mock data generation
        temp_celsius = random.uniform(5.0, 35.0)
        conditions = random.choice(["Sunny", "Cloudy", "Rainy", "Windy", "Snowy (unlikely!)"])
        humidity = random.randint(30, 90)
        return f"The current weather in {location} is {temp_celsius:.1f}°C, {conditions}, with {humidity}% humidity."
    except Exception as e:
        return f"Error fetching weather for {location}: {e}"
# Option 1: Using the @tool decorator (simpler for basic functions)
# from langchain_core.tools import tool
#
# @tool("weather_checker", args_schema=WeatherInput)
# def weather_tool(location: str) -> str:
#     """Useful for finding the current weather conditions in a specific city."""
#     return get_current_weather(location)
# Option 2: Subclassing BaseTool (more control, better for complex logic/state)
class WeatherTool(BaseTool):
    # Tool names sent to the OpenAI tools API must match ^[a-zA-Z0-9_-]+$,
    # so avoid spaces here
    name: str = "weather_checker"
    description: str = "Useful for finding the current weather conditions in a specific city. Input should be a city name."
    args_schema: Type[BaseModel] = WeatherInput

    def _run(self, location: str) -> str:
        """Use the tool."""
        return get_current_weather(location)

    async def _arun(self, location: str) -> str:
        """Use the tool asynchronously."""
        # For this simple mock function, async isn't strictly necessary,
        # but it demonstrates the pattern for real async API calls.
        # In a real scenario, you'd use an async HTTP client (e.g., aiohttp).
        return self._run(location)  # Simulate async call

weather_tool = WeatherTool()
# Test the tool directly (optional)
# print(weather_tool.invoke({"location": "London"}))
# print(weather_tool.invoke("Paris")) # Also accepts direct string input if args_schema allows
Important points about this tool:
- WeatherInput Schema: We define a Pydantic model WeatherInput to specify the expected input (location). This helps LangChain validate inputs and provides structure for the LLM tool calling API.
- get_current_weather Function: This is the core logic. It currently uses random data but mimics the structure of an API call handler, including basic error handling. The print statement helps trace tool execution.
- WeatherTool Class: We subclass BaseTool from langchain_core for explicit control:
  - name: A concise identifier for the tool. Names passed to the OpenAI tools API must not contain spaces, hence weather_checker.
  - description: Important for the agent. The LLM uses this description to decide when to use the tool and what input to provide. Make it clear and informative.
  - args_schema: Links to our Pydantic input model.
  - _run: The synchronous execution method.
  - _arun: The asynchronous execution method.

Next, we need a tool to estimate driving times. Again, real implementations might use APIs like the Google Maps Distance Matrix or OSRM. We'll simulate this with a simple calculation.
# --- Driving Time Tool Implementation ---
class DrivingTimeInput(BaseModel):
    """Input schema for the Driving Time Tool."""
    origin: str = Field(description="The starting city or location.")
    destination: str = Field(description="The destination city or location.")

def estimate_driving_time(origin: str, destination: str) -> str:
    """
    Simulates estimating driving time between two locations.
    Assumes a fixed average speed for simplicity.
    """
    print(f"---> Calling Driving Time Tool for: {origin} to {destination}")
    # Handle identical origin/destination up front (the simulated distance
    # below is always at least 50 km, so it can never be zero)
    if origin.strip().lower() == destination.strip().lower():
        return f"Origin and destination ({origin}) are the same."
    # Very simplified distance simulation based on city name lengths
    # (Replace with a real distance calculation or API call in production)
    simulated_distance_km = abs(len(origin) - len(destination)) * 50 + random.randint(50, 500)
    average_speed_kph = 80
    time_hours = simulated_distance_km / average_speed_kph
    hours = int(time_hours)
    minutes = int((time_hours - hours) * 60)
    return (f"The estimated driving time from {origin} to {destination} is "
            f"approximately {hours} hours and {minutes} minutes ({simulated_distance_km} km).")
class DrivingTimeTool(BaseTool):
    # Again, no spaces in the tool name
    name: str = "driving_time_estimator"
    description: str = ("Useful for estimating the driving time between two cities. "
                        "Input should be the origin city and the destination city.")
    args_schema: Type[BaseModel] = DrivingTimeInput

    def _run(self, origin: str, destination: str) -> str:
        """Use the tool."""
        return estimate_driving_time(origin, destination)

    async def _arun(self, origin: str, destination: str) -> str:
        """Use the tool asynchronously."""
        # Simulate async call for demonstration
        return self._run(origin, destination)

driving_tool = DrivingTimeTool()
# Test the tool directly (optional)
# print(driving_tool.invoke({"origin": "Paris", "destination": "Berlin"}))
This tool follows the same pattern as the weather tool: an input schema (DrivingTimeInput), a core logic function (estimate_driving_time), and a BaseTool subclass (DrivingTimeTool).
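Before wiring these into an agent, it's worth confirming what the args_schema buys you: Pydantic validates inputs before _run ever executes. A quick check (the exact error text depends on your Pydantic version):

# Malformed input is rejected by the schema before the tool logic runs
try:
    driving_tool.invoke({"origin": "Paris"})  # destination is missing
except Exception as e:
    print(f"Rejected by args_schema: {e}")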
Now that we have our custom tools, let's integrate them into an agent. We will use a Tool Calling Agent. This is the modern standard for models like GPT-3.5 and GPT-4, utilizing the model's native API for function calling rather than relying on brittle text parsing (like the older ReAct pattern).
# --- Agent Setup ---
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_tool_calling_agent, AgentExecutor
# Ensure you have OPENAI_API_KEY set in your environment or .env file
# os.environ["OPENAI_API_KEY"] = "your_api_key"
if not os.getenv("OPENAI_API_KEY"):
    print("Warning: OPENAI_API_KEY not set. Agent execution will likely fail.")
# 1. Initialize the LLM
# Tool calling requires a chat model that supports the feature
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
# 2. Define the list of tools
tools = [weather_tool, driving_tool]
# 3. Get the prompt template
# Pulls a predefined prompt optimized for tool calling agents
# You can explore other prompts on the LangChain Hub
prompt = hub.pull("hwchase17/openai-tools-agent")
# 4. Create the Tool Calling Agent
# This binds the LLM, tools, and prompt together, leveraging the OpenAI Tools API
agent = create_tool_calling_agent(llm, tools, prompt)
# 5. Create the Agent Executor
# This runs the agent loop
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,      # Set to True to see the agent's tool usage
    max_iterations=5,  # Prevent potential infinite loops
)
print("Agent Executor created successfully.")
Let's break down the agent creation:

1. LLM: We initialize ChatOpenAI. Temperature is set to 0 for more deterministic responses suitable for tool use.
2. Tools: We collect the weather_tool and driving_tool instances into a list.
3. Prompt: We pull hwchase17/openai-tools-agent from the LangChain Hub. This prompt is specifically designed to handle the system instructions required for tool calling models.
4. create_tool_calling_agent: This function constructs the agent logic. Unlike legacy agents that needed complex text parsing instructions, this agent uses the bind_tools method internally to attach our tool definitions directly to the API call.
5. AgentExecutor: This is the runtime environment for the agent. It manages the loop: sending input to the LLM, executing the tools the LLM requests, and feeding the output back to the LLM.

With the agent executor ready, let's test it with different queries.
# --- Running the Agent ---
print("\n--- Running Simple Weather Query ---")
response1 = agent_executor.invoke({
"input": "What's the weather like right now in Toronto?"
})
print("\nFinal Answer:", response1['output'])
print("\n--- Running Simple Driving Time Query ---")
response2 = agent_executor.invoke({
"input": "How long does it take to drive from Berlin to Munich?"
})
print("\nFinal Answer:", response2['output'])
print("\n--- Running Multi-Tool Query ---")
response3 = agent_executor.invoke({
"input": "Can you tell me the current weather in Rome and also how long it might take to drive there from Naples?"
})
print("\nFinal Answer:", response3['output'])
# Example of a query the agent should answer directly (if possible)
# print("\n--- Running Non-Tool Query ---")
# response4 = agent_executor.invoke({
# "input": "What is the capital of France?"
# })
# print("\nFinal Answer:", response4['output'])
Observe the output when verbose=True. You will see a pattern distinct from older ReAct agents:

1. The log shows a structured call such as: Invoking: weather_checker with {'location': 'Toronto'}.
2. The executor runs weather_tool and logs its output.
3. The result is fed back to the LLM, which either requests another tool or produces the final answer.

Pay close attention to how the agent uses the exact names and input schemas defined in your BaseTool subclasses. The quality of the tool description is essential for the agent's ability to select the correct tool.
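If you want programmatic access to those tool calls rather than reading the verbose log, AgentExecutor can return them alongside the final answer. A short sketch using the standard return_intermediate_steps flag:

# Each intermediate step is an (AgentAction, observation) pair
debug_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,
)
result = debug_executor.invoke({"input": "What's the weather in Madrid?"})
for action, observation in result["intermediate_steps"]:
    print(action.tool, action.tool_input, "->", observation)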
This practical exercise demonstrated the fundamental workflow for creating a LangChain agent with custom capabilities:
- Define tools: Subclass BaseTool or use the @tool decorator. Pay careful attention to the name, description, and args_schema.
- Create the agent: Use create_tool_calling_agent for modern LLMs to ensure reliable tool usage without parsing errors.
- Run it: Wrap the agent in an AgentExecutor to manage the runtime loop.
- Verify: Run with verbose=True to confirm the agent selects the correct tools.

From here, you can explore:

- Migrating from AgentExecutor to LangGraph, which provides a more controllable graph-based execution environment (a minimal sketch follows this list).
- Implementing truly asynchronous tools (the _arun methods) for I/O-bound tasks.
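As a taste of that first direction, here is a minimal sketch of the same tools running on LangGraph's prebuilt agent, assuming langgraph is installed (pip install langgraph):

# Minimal sketch: reuse the llm and tools defined above with LangGraph
from langgraph.prebuilt import create_react_agent

graph_agent = create_react_agent(llm, tools)
result = graph_agent.invoke(
    {"messages": [("user", "What's the weather like in Rome?")]}
)
print(result["messages"][-1].content)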