Alright, let's put theory into practice. This hands-on exercise will guide you through wrapping a publicly available API, transforming it into a functional tool that an LLM agent can utilize. We'll focus on the Open-Meteo API, a free weather forecast service that doesn't require an API key for basic use, making it ideal for this learning exercise. Our goal is to create a tool that allows an LLM agent to fetch the current weather for a given geographical coordinate.
Throughout this process, we will touch upon several topics discussed earlier in this chapter, such as defining a tool's interface for the LLM, making the API call, parsing the response, and structuring the output in a way that's useful for the agent.
We've chosen the Open-Meteo API for its simplicity and accessibility. Specifically, we'll use its endpoint for current weather data. A typical request to get current weather for Berlin (Latitude: 52.52, Longitude: 13.41) looks like this:
https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current_weather=true
This request returns a JSON response containing various weather details. For our tool, we'll aim to extract and present the temperature and wind speed.
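The fields we care about are nested under a current_weather key in the response. A minimal parsing sketch (the payload below uses made-up values, but its shape matches what the current_weather block of the API returns):

```python
import json

# Illustrative payload in the shape Open-Meteo returns for
# ?current_weather=true (the numeric values here are made up).
sample_response = json.loads("""
{
  "latitude": 52.52,
  "longitude": 13.41,
  "current_weather": {
    "temperature": 18.3,
    "windspeed": 11.2,
    "winddirection": 240,
    "weathercode": 2,
    "time": "2024-06-01T12:00"
  }
}
""")

current = sample_response["current_weather"]
temperature = current["temperature"]  # degrees Celsius
wind_speed = current["windspeed"]     # km/h
print(temperature, wind_speed)
```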
Before writing any Python code for the tool itself, we must first define how the LLM will perceive and use this tool. This involves deciding on a name, a clear description, and the expected input and output schemas.
Tool Name: get_current_weather

Description: Fetches the current weather (temperature and wind speed) for a given geographical coordinate.

Input Schema:
- latitude (float, required): The latitude of the location.
- longitude (float, required): The longitude of the location.

Output Schema:
- temperature_celsius (float): The current temperature in degrees Celsius.
- wind_speed_kmh (float): The current wind speed in kilometers per hour.
- summary (string): A brief textual summary of the weather conditions.

This definition is what the LLM agent will use to understand when and how to call our tool. A precise description helps the LLM make better decisions.
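Many agent frameworks expect this interface in a machine-readable form. One common way to express it is a JSON-Schema-style definition; the exact envelope varies by framework, so treat this layout as a sketch:

```python
# A JSON-Schema-style description of the tool's interface. The exact
# wrapper format differs between frameworks, so this layout is illustrative.
get_current_weather_spec = {
    "name": "get_current_weather",
    "description": (
        "Fetch the current weather (temperature in Celsius, wind speed in "
        "km/h, and a short summary) for a geographic coordinate."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {
                "type": "number",
                "description": "The latitude of the location.",
            },
            "longitude": {
                "type": "number",
                "description": "The longitude of the location.",
            },
        },
        "required": ["latitude", "longitude"],
    },
}

print(get_current_weather_spec["name"])
```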
Now, let's implement the Python function that will act as our tool. We'll use the requests library to make the HTTP call. If you don't have it installed, you can install it with pip: pip install requests.
import requests
import json

def get_current_weather_from_api(latitude: float, longitude: float) -> dict:
    """
    Fetches current weather data from the Open-Meteo API for a given
    latitude and longitude.

    Args:
        latitude: The latitude of the location.
        longitude: The longitude of the location.

    Returns:
        A dictionary containing temperature, wind speed, and a summary,
        or an error message if the API call fails.
    """
    base_url = "https://api.open-meteo.com/v1/forecast"
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "current_weather": "true"
    }
    try:
        response = requests.get(base_url, params=params, timeout=10)
        response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
        data = response.json()

        # Extract the relevant information
        current_weather_data = data.get("current_weather", {})
        temperature = current_weather_data.get("temperature")
        wind_speed = current_weather_data.get("windspeed")
        weather_code = current_weather_data.get("weathercode")

        if temperature is None or wind_speed is None:
            return {"error": "Could not retrieve complete weather data from API response."}

        # A very simple interpretation of weather codes for the summary.
        # A production tool would cover the full set of codes.
        weather_summary = f"Temperature: {temperature}°C, Wind Speed: {wind_speed} km/h."
        if weather_code is not None:
            if weather_code == 0:
                weather_summary += " Condition: Clear sky."
            elif weather_code in [1, 2, 3]:
                weather_summary += " Condition: Mainly clear to partly cloudy."
            elif 40 < weather_code < 70:  # various rain codes
                weather_summary += " Condition: Rainy."
            # Add more conditions as needed
            else:
                weather_summary += f" Condition: Unspecified (code: {weather_code})."

        # This is the structured output we defined earlier
        return {
            "temperature_celsius": float(temperature),
            "wind_speed_kmh": float(wind_speed),
            "summary": weather_summary
        }
    except requests.exceptions.HTTPError as http_err:
        return {"error": f"HTTP error occurred: {http_err}"}
    except requests.exceptions.RequestException as req_err:
        return {"error": f"Request error occurred: {req_err}"}
    except json.JSONDecodeError:
        return {"error": "Failed to decode API response."}
    except Exception as e:
        return {"error": f"An unexpected error occurred: {str(e)}"}


# Example usage (for direct testing):
if __name__ == "__main__":
    # Berlin coordinates
    berlin_lat = 52.52
    berlin_lon = 13.41
    weather_info = get_current_weather_from_api(berlin_lat, berlin_lon)
    print(json.dumps(weather_info, indent=2))

    # Example of an invalid request (e.g., out-of-bounds coordinates).
    # The Open-Meteo API might still return something, but it's good to test edge cases.
    invalid_lat = 200.0
    invalid_lon = 200.0
    error_info = get_current_weather_from_api(invalid_lat, invalid_lon)
    print("\nTest with invalid coordinates (expected API error or handled error):")
    print(json.dumps(error_info, indent=2))
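The if/elif chain above only interprets a handful of codes. Open-Meteo uses WMO weather interpretation codes, so a more complete tool would typically replace the chain with a lookup table. A partial sketch (the mapping below covers common codes only; consult the Open-Meteo documentation for the full list):

```python
# Partial mapping of WMO weather interpretation codes (as used by
# Open-Meteo) to human-readable descriptions. Deliberately not exhaustive.
WMO_WEATHER_CODES = {
    0: "Clear sky",
    1: "Mainly clear",
    2: "Partly cloudy",
    3: "Overcast",
    45: "Fog",
    51: "Light drizzle",
    61: "Slight rain",
    63: "Moderate rain",
    65: "Heavy rain",
    71: "Slight snowfall",
    80: "Slight rain showers",
    95: "Thunderstorm",
}

def describe_weather_code(code: int) -> str:
    """Return a textual description, falling back to the raw code."""
    return WMO_WEATHER_CODES.get(code, f"Unspecified (code: {code})")

print(describe_weather_code(0))   # Clear sky
print(describe_weather_code(42))  # Unspecified (code: 42)
```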
Key aspects of this implementation:

- API call: We use requests.get() to fetch data.
- Response parsing: response.json() parses the JSON response into a Python dictionary.
- Data extraction: We pull out temperature and windspeed. Notice how we are selecting specific pieces of information and transforming them into our desired output schema. This is a simple form of summarizing and presenting API data for the LLM. The weather_summary string is also a form of processed output.
- Error handling: The try-except block handles potential issues like network problems (requests.exceptions.RequestException), HTTP errors from the API (e.g., 404 Not Found or 500 Server Error, via response.raise_for_status()), and issues with parsing the response (json.JSONDecodeError). Returning an error dictionary allows the agent or the orchestrator to handle failures gracefully.
- Structured output: The returned dictionary matches the Output Schema we defined. This consistency is important for the LLM.

While the exact method of registering this tool with an LLM agent depends on the specific framework (like LangChain, LlamaIndex, or a custom agent loop), the core components are:

- The function: get_current_weather_from_api is the executable code.
- The metadata: The tool's name (get_current_weather), description, and schemas for input/output are provided to the LLM agent. The agent uses this metadata to decide when to call the function and what parameters to pass.

For instance, in a LangChain-like setup, you might wrap get_current_weather_from_api in a Tool object, providing the name, description, and potentially an args_schema based on Pydantic models for input validation.
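Framework specifics aside, the pattern can be sketched in a framework-agnostic way: keep each tool's executable code and metadata together in a registry, and let a dispatcher route the agent's tool calls. Everything below (the registry layout, the dispatcher, and the stub function standing in for the real tool) is illustrative, not any particular framework's API:

```python
from typing import Any, Dict

# Hypothetical stand-in for the real tool; in practice you would register
# get_current_weather_from_api here instead.
def get_current_weather_stub(latitude: float, longitude: float) -> dict:
    return {
        "temperature_celsius": 18.3,
        "wind_speed_kmh": 11.2,
        "summary": f"Weather at ({latitude}, {longitude})",
    }

# Registry pairing each tool's function with the metadata the LLM uses
# to decide when to call it and what parameters to pass.
TOOL_REGISTRY: Dict[str, Dict[str, Any]] = {
    "get_current_weather": {
        "function": get_current_weather_stub,
        "description": "Get current weather for a latitude/longitude.",
        "required_args": ["latitude", "longitude"],
    },
}

def dispatch_tool_call(name: str, arguments: Dict[str, Any]) -> dict:
    """Validate and execute a tool call emitted by the agent."""
    tool = TOOL_REGISTRY.get(name)
    if tool is None:
        return {"error": f"Unknown tool: {name}"}
    missing = [a for a in tool["required_args"] if a not in arguments]
    if missing:
        return {"error": f"Missing arguments: {missing}"}
    return tool["function"](**arguments)

result = dispatch_tool_call(
    "get_current_weather", {"latitude": 52.52, "longitude": 13.41}
)
print(result["summary"])
```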
The LLM, when faced with a query like "What's the weather like in Berlin (latitude 52.52, longitude 13.41)?", would use the description of get_current_weather to identify it as relevant, extract the latitude and longitude from the query, and invoke our Python function. The structured JSON output from our tool is then passed back to the LLM, which can use it to formulate a natural language answer.
Direct Testing: As shown in the if __name__ == "__main__": block, you can test the Python function directly:
# Berlin coordinates
berlin_lat = 52.52
berlin_lon = 13.41
weather_info = get_current_weather_from_api(berlin_lat, berlin_lon)
print(json.dumps(weather_info, indent=2))
# Example of an invalid location, e.g. middle of the ocean or invalid coordinates
# Open-Meteo API might handle this with an error or specific values
pacific_lat = 0.0
pacific_lon = -150.0 # Middle of the Pacific
pacific_weather = get_current_weather_from_api(pacific_lat, pacific_lon)
print(f"\nWeather in mid-Pacific:")
print(json.dumps(pacific_weather, indent=2))
This helps ensure the core logic, API interaction, and data parsing work correctly.
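Because the parsing logic is separable from the network call, you can also exercise it offline by stubbing requests.get with unittest.mock. A sketch of the idea (the fetch function below is a condensed inline copy of the tool's fetch-and-parse logic, written here only so the example is self-contained; in a real test you would import get_current_weather_from_api from its module):

```python
from unittest import mock

import requests  # imported so the patch target exists

# Condensed fetch-and-parse logic, standing in for the real tool function.
def fetch_current_weather(latitude: float, longitude: float) -> dict:
    response = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": latitude, "longitude": longitude,
                "current_weather": "true"},
        timeout=10,
    )
    response.raise_for_status()
    current = response.json().get("current_weather", {})
    return {"temperature_celsius": float(current["temperature"]),
            "wind_speed_kmh": float(current["windspeed"])}

# Canned response object standing in for the real API.
fake_response = mock.Mock()
fake_response.raise_for_status.return_value = None
fake_response.json.return_value = {
    "current_weather": {"temperature": 18.3, "windspeed": 11.2}
}

with mock.patch("requests.get", return_value=fake_response):
    result = fetch_current_weather(52.52, 13.41)

print(result)  # parsed entirely from the canned data; no network call made
```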
Testing via an LLM Agent: Once integrated into an agent framework, you would test it by posing questions to the LLM that should trigger the tool, for example "What's the weather like in Berlin (latitude 52.52, longitude 13.41)?" or "How windy is it right now at latitude 0.0, longitude -150.0?" Observe whether the LLM correctly identifies the need for the tool, extracts the parameters accurately, and whether the tool executes successfully, providing the structured data back to the LLM for its final response.
This practical exercise has demonstrated several principles discussed earlier:

- Response parsing: extracting and reshaping API data via response.json() and dictionary navigation.
- Authentication: the Open-Meteo endpoint we used requires no API key, but for APIs that do, the key would typically be passed in the headers or params of the requests.get() call. Never hardcode sensitive keys directly in your tool's source code for production systems. For example:

# Hypothetical API key usage
# import os
# api_key = os.environ.get("MY_WEATHER_API_KEY")
# headers = {"Authorization": f"Bearer {api_key}"}
# response = requests.get(url, params=params, headers=headers)

- Resilience: handling transient errors (e.g., 503 Service Unavailable) is also a common requirement. Libraries like tenacity can simplify implementing retry mechanisms.

This exercise provides a foundational template. You can adapt this approach to wrap a wide variety of other public or private APIs, significantly expanding the capabilities of your LLM agents. The key is always to clearly define the tool's purpose and interface for the LLM, and to robustly handle the interaction with the external service.
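To make the retry idea mentioned above concrete, here is a minimal sketch of a retry wrapper in plain Python (no tenacity dependency; the attempt count and backoff delays are illustrative and should be tuned to the API you're calling):

```python
import time

import requests

def get_with_retries(url: str, params: dict,
                     max_attempts: int = 3,
                     base_delay: float = 1.0) -> requests.Response:
    """GET with exponential backoff on transient failures.

    Retries on connection errors, timeouts, and 5XX responses; 4XX
    responses are returned immediately, since retrying won't fix a bad
    request. Libraries like tenacity express this same pattern declaratively.
    """
    last_error = None
    for attempt in range(max_attempts):
        try:
            response = requests.get(url, params=params, timeout=10)
            if response.status_code < 500:
                return response  # success, or a client error worth surfacing
            last_error = requests.exceptions.HTTPError(
                f"Server error: {response.status_code}"
            )
        except requests.exceptions.RequestException as exc:
            last_error = exc
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_error
```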
© 2025 ApX Machine Learning