LLM frameworks are built from core components such as models, prompt templates, parsers, chains, memory, and agents that can use tools. Agents are designed to reason about a task and decide when to call external tools. In this exercise, we build a simple agent with LangChain that uses an LLM for reasoning and interacts with an external tool, specifically a web search engine. This demonstrates how agents can access external information sources to answer questions.
Before we begin, ensure you have the necessary libraries installed. You'll also need an API key for the LLM provider you choose (we'll use OpenAI in this example).
Install Libraries:
pip install langchain langchain_openai duckduckgo-search
- langchain: The core LangChain library.
- langchain_openai: Provides integrations for OpenAI models.
- duckduckgo-search: A library to perform DuckDuckGo searches, which LangChain can use as a tool.

Set API Key: Make sure your OpenAI API key is available as an environment variable. You can set it in your terminal session or using a .env file (which requires python-dotenv to be installed: pip install python-dotenv).
export OPENAI_API_KEY='your-api-key-here'
Or, create a .env file in your project directory:
OPENAI_API_KEY='your-api-key-here'
And load it in your Python script using:
import os
from dotenv import load_dotenv

load_dotenv()  # reads variables from a .env file into the environment
openai_api_key = os.getenv("OPENAI_API_KEY")
Note: Always handle API keys securely. Avoid hardcoding them directly in your source code, especially if you plan to share or version control it. Using environment variables is a standard practice.
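Since the rest of the script assumes the key is present, it can also help to fail fast with a clear message rather than hit an opaque authentication error later. The helper below is a small illustrative utility (our own, not part of LangChain or the OpenAI SDK):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named API key, or raise a clear error if it is missing."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Export it in your shell or add it to a .env file."
        )
    return key
```

Calling require_api_key() near the top of your script surfaces a missing key immediately, with a message that tells the reader how to fix it.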
Let's create a Python script to build and run our agent.
import os
from langchain_openai import ChatOpenAI
from langchain import hub # Hub for pre-made prompts
from langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools
# Ensure your OPENAI_API_KEY environment variable is set
# 1. Initialize the LLM
# We use ChatOpenAI for its ability to use OpenAI's "tools" feature (function calling)
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)
# 2. Load Tools
# LangChain provides convenient loaders for common tools.
# "ddg-search" uses the DuckDuckGo search engine.
tools = load_tools(["ddg-search"])
# print(f"Loaded tools: {[tool.name for tool in tools]}")
# Expected Output: Loaded tools: ['duckduckgo_search']
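Under the hood, each tool pairs a name and a description (which the LLM reads when deciding what to call) with a callable. As a rough mental model, independent of LangChain's actual classes, a tool looks like this simplified stand-in:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToyTool:
    """Simplified stand-in for a LangChain tool: a described callable."""
    name: str
    description: str  # the LLM uses this text to decide when to call the tool
    func: Callable[[str], str]

# A fake search tool with a canned response, for illustration only.
search = ToyTool(
    name="duckduckgo_search",
    description="Searches the web and returns result snippets.",
    func=lambda query: f"Results for: {query}",
)

print(search.func("capital of Brazil"))  # Results for: capital of Brazil
```

The description matters as much as the function: it is the only signal the LLM has about when the tool is appropriate.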
# 3. Get the Agent Prompt Template
# We pull a pre-built prompt optimized for OpenAI tools agents from LangChain Hub.
# This prompt includes placeholders for input, tools, and intermediate steps (agent_scratchpad).
prompt = hub.pull("hwchase17/openai-tools-agent")
# You can inspect the prompt template like this:
# print(prompt.pretty_repr())
# 4. Create the Agent
# The create_openai_tools_agent function binds the LLM, tools, and prompt together.
# It uses OpenAI's function calling capabilities to determine which tool to use.
agent = create_openai_tools_agent(llm, tools, prompt)
# 5. Create the Agent Executor
# The AgentExecutor runs the agent in a loop.
# It calls the agent, determines the action (which tool to use and what input to provide),
# executes the tool, gets the result (observation), and passes it back to the agent
# until the agent decides it has the final answer.
# 'verbose=True' shows the agent's thought process.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# 6. Run the Agent
# Let's ask a question that requires up-to-date external information.
print("\nRunning Agent with Query 1:")
response1 = agent_executor.invoke({
"input": "What is the current capital of Brazil and what was the previous one?"
})
print("\nFinal Answer (Query 1):")
print(response1['output'])
# Example 2: A question requiring calculation or recent info
print("\nRunning Agent with Query 2:")
response2 = agent_executor.invoke({
"input": "Who won the men's singles final at Wimbledon in 2023?"
})
print("\nFinal Answer (Query 2):")
print(response2['output'])
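The loop that AgentExecutor runs can be sketched in plain Python. This is a conceptual sketch, not LangChain's actual implementation; the fake_llm function below is a deterministic stand-in for the model's decision step:

```python
def run_agent_loop(llm_step, tools, question, max_iterations=5):
    """Minimal sketch of an agent loop: call the model, run the chosen tool,
    feed the observation back, and stop when a final answer is produced."""
    scratchpad = []  # records (decision, observation) pairs, like agent_scratchpad
    for _ in range(max_iterations):
        decision = llm_step(question, scratchpad)
        if decision["type"] == "final":
            return decision["answer"]
        tool = tools[decision["tool"]]
        observation = tool(decision["tool_input"])
        scratchpad.append((decision, observation))
    return "Stopped: too many iterations."

# Deterministic stand-in for the LLM: search once, then answer.
def fake_llm(question, scratchpad):
    if not scratchpad:
        return {"type": "tool", "tool": "search", "tool_input": question}
    return {"type": "final", "answer": f"Answer based on: {scratchpad[-1][1]}"}

toy_tools = {"search": lambda q: f"snippets about '{q}'"}
print(run_agent_loop(fake_llm, toy_tools, "capital of Brazil"))
# Answer based on: snippets about 'capital of Brazil'
```

Note the max_iterations guard: real agent executors also cap the loop so a model that never produces a final answer cannot run forever.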
When you run this script with verbose=True set on the AgentExecutor, you'll see the agent's thought process. The output will look similar to this (details may vary):
> Entering new AgentExecutor chain...
Invoking: `duckduckgo_search` with `{'query': 'current capital of Brazil previous capital'}`
Brasília is the federal capital of Brazil and seat of government of the Federal District. The city is located high in the Brazilian highlands in the country's center-western region. It was founded by President Juscelino Kubitschek on April 21, 1960, to serve as the new national capital. Rio de Janeiro was the capital of Brazil for almost two centuries, from 1763 until 1960, when the government institutions were transferred to the newly built city of Brasília. Salvador was the first capital of Brazil, from 1549 to 1763. Brazil has had three capitals: Salvador, Rio de Janeiro and Brasília.
The current capital of Brazil is Brasília.
The previous capital of Brazil was Rio de Janeiro (from 1763 to 1960). Before Rio de Janeiro, the first capital was Salvador (from 1549 to 1763).
> Finished chain.
Final Answer (Query 1):
The current capital of Brazil is Brasília. The previous capital was Rio de Janeiro.
Running Agent with Query 2:
> Entering new AgentExecutor chain...
Invoking: `duckduckgo_search` with `{'query': "men's singles winner Wimbledon 2023"}`
Spain's Carlos Alcaraz holds the winner's trophy following victory against Serbia's Novak Djokovic in the men's singles final tennis match on day fourteen of the 2023 Wimbledon Championships at The All England Lawn Tennis Club in Wimbledon, southwest London, on July 16, 2023. Carlos Alcaraz defeated Novak Djokovic in the final, 1-6, 7-6(8-6), 6-1, 3-6, 6-4 to win the gentlemen's singles tennis title at the 2023 Wimbledon Championships. It was his second major title, following the 2022 US Open. Alcaraz became the third Spanish man to win Wimbledon, after Manuel Santana and Rafael Nadal. Wimbledon 2023 men's final result: Carlos Alcaraz beats Novak Djokovic in five-set thriller Updated / Sunday, 16 Jul 2023 21:10 Carlos Alcaraz ended Novak Djokovic's reign at Wimbledon... Carlos Alcaraz won the men's singles title at Wimbledon in 2023.
> Finished chain.
Final Answer (Query 2):
Carlos Alcaraz won the men's singles final at Wimbledon in 2023, defeating Novak Djokovic.
Let's break down what happened:

1. The AgentExecutor received the input dictionary containing the user's question.
2. The LLM, guided by the hwchase17/openai-tools-agent prompt and its function calling training, determined that the duckduckgo_search tool could find the information and generated a suitable search query.
3. The AgentExecutor identified the chosen tool (duckduckgo_search) and the input generated by the LLM. It executed the search.
4. The search result was passed back to the LLM by the AgentExecutor as the "observation".
5. The LLM composed an answer from the observation. The AgentExecutor recognized that the LLM provided a final answer (not another tool call) and returned the result.

Now that you have a basic agent running, try these modifications:
- Add the llm-math tool using load_tools(["ddg-search", "llm-math"], llm=llm) and ask questions involving calculations (e.g., "What is the square root of the approximate population of Tokyo?"). Observe how the agent selects between search and the calculator.
- Inspect the prompt object (print(prompt.pretty_repr())) to see the structure and instructions the agent uses. Understand the roles of input, tools, tool_names, and agent_scratchpad.
- Try a different model, such as gpt-4 (if accessible), by changing ChatOpenAI(model="..."). Observe if the reasoning or tool usage changes. Be mindful of potential cost differences.

This hands-on exercise provides a foundational understanding of how agents operate within frameworks like LangChain. By combining LLM reasoning with external tools, you can build applications capable of performing complex tasks that require accessing and processing up-to-date or specialized information.
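To build intuition for the first modification, here is a toy, keyword-based stand-in for the tool-selection step. The real agent makes this choice via the LLM's function calling, informed by each tool's description, not by keyword matching; this sketch only illustrates the kind of routing decision being made:

```python
def pick_tool(question: str) -> str:
    """Toy router: a crude keyword heuristic standing in for the LLM's choice
    between a calculator tool and a web search tool."""
    math_words = ("square root", "sum of", "multiply", "percent of", "calculate")
    if any(word in question.lower() for word in math_words):
        return "llm-math"
    return "duckduckgo_search"

print(pick_tool("What is the square root of 144?"))  # llm-math
print(pick_tool("Who won Wimbledon in 2023?"))       # duckduckgo_search
```

Watching the verbose trace after adding llm-math shows the real version of this decision: the model first routes the arithmetic portion of a question to the calculator and the factual portion to search.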
© 2026 ApX Machine Learning