Before we begin, ensure your environment is prepared. We will use the Tavily Search API for our web search tool because of its excellent integration with LangChain and its focus on providing clean, AI-ready search results.
First, install the necessary Python packages. We include langgraph as it is the modern standard for building agent workflows in the LangChain ecosystem.
pip install langchain langchain-openai tavily-python langgraph
Next, you need to configure API keys for both OpenAI (for the LLM) and Tavily (for the search tool). You can get your Tavily API key from their website. Set these as environment variables for secure access.
export OPENAI_API_KEY="your-openai-api-key"
export TAVILY_API_KEY="your-tavily-api-key"
With our environment ready, we can start assembling the components of our agent.
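Before moving on, it can save debugging time to verify that both keys are actually visible to Python. This is a small standard-library sketch, not part of the agent itself; the `check_api_keys` helper is illustrative:

```python
import os

def check_api_keys(required=("OPENAI_API_KEY", "TAVILY_API_KEY")):
    """Return the names of any required keys that are not set."""
    return [name for name in required if not os.environ.get(name)]

missing = check_api_keys()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

Running this before the rest of the tutorial lets you fail fast instead of hitting an authentication error mid-run.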
An agent's capabilities are defined by the tools it has access to. For our purpose, a single tool is sufficient: a web searcher. LangChain provides a convenient integration for the Tavily Search API.
Let's import and initialize this tool. We will limit the number of search results to three to keep the context provided to the LLM concise.
from langchain_community.tools.tavily_search import TavilySearchResults
# Initialize the tool. We can configure the number of results it returns.
# Fewer results are faster and use fewer tokens.
tools = [TavilySearchResults(max_results=3)]
This tools list will be provided to our agent, making the TavilySearchResults function available for it to call.
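It helps to remember what a "tool" is under the hood: a named function plus a natural-language description that the LLM reads when deciding whether to call it. Here is a framework-free sketch of that idea; the `web_search` function is an illustrative stand-in, not the real Tavily integration:

```python
def web_search(query: str) -> str:
    """Search the web and return a short summary of the top results."""
    # Illustrative stand-in: a real tool would call a search API here.
    return f"Results for: {query}"

# Roughly what an agent framework extracts and exposes to the LLM:
tool_spec = {
    "name": web_search.__name__,
    "description": web_search.__doc__,
}
```

This is why well-written docstrings and descriptions matter in practice: they are the only signal the model has about when a tool is appropriate.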
The core of our agent is an LLM that can reason about which tool to use. Tasks that involve following multi-step instructions and emitting structured tool calls benefit from more capable models, so we will use OpenAI's gpt-4o.
from langchain_openai import ChatOpenAI
# Initialize the LLM. A temperature of 0 encourages more deterministic, factual outputs.
llm = ChatOpenAI(model="gpt-4o", temperature=0)
We will pass this LLM instance directly to our agent constructor in the next step.
To build the agent, we will use langgraph, which allows us to create robust, stateful agent workflows. The create_react_agent function constructs a graph that acts as the runtime for our agent. It manages the cycle of calling the LLM, interpreting the decision (thought), executing the chosen tool (action), and feeding the output back to the model (observation).
The execution loop works as follows: the LLM reasons about the task (thought), calls a tool (action), observes the result (observation), and feeds that observation back into the next round of reasoning until it can answer.
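To make the loop concrete, here is a heavily simplified, framework-free sketch of the same cycle. The `fake_llm` and `fake_search` functions are stand-ins for the real model and tool, and the loop structure mirrors what the graph runtime does for us:

```python
def fake_llm(history):
    """Stand-in for the model: request a tool call until an observation arrives."""
    if any(role == "observation" for role, _ in history):
        return ("final", f"Answer based on: {history[-1][1]}")
    return ("action", "search for the announcement date")

def fake_search(query):
    """Stand-in for the web search tool."""
    return "Blackwell was announced on March 18, 2024."

def react_loop(question, max_steps=5):
    history = [("user", question)]
    for _ in range(max_steps):
        kind, content = fake_llm(history)            # Thought: decide next step
        if kind == "final":
            return content                           # Done reasoning
        history.append(("action", content))          # Action: call the tool
        history.append(("observation", fake_search(content)))  # Observation
    return "Step limit reached."

print(react_loop("When was Blackwell announced?"))
```

The `max_steps` cap matters in real systems too: without a step limit, an agent that keeps choosing tools can loop indefinitely.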
Now, let's create the agent in code.
from langgraph.prebuilt import create_react_agent
# The create_react_agent function binds the LLM to the tools and sets up
# the execution graph that manages the interaction loop.
agent_executor = create_react_agent(llm, tools)
With all the components assembled, our agent is ready to use. Let's ask it a question that requires searching for information from two different domains: technology and finance.
The agent expects a dictionary containing a list of messages as input. We format our question as a user message.
question = "What was the stock price of NVIDIA (NVDA) at the time of the announcement of the Blackwell GPU architecture?"
# Invoke the agent with the user input
response = agent_executor.invoke({"messages": [("user", question)]})
print("\nFinal Answer:")
# The final answer is contained in the last message of the returned history
print(response["messages"][-1].content)
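If you want more than the final answer, remember that `response["messages"]` holds the entire conversation in order. The sketch below uses plain tuples to illustrate that shape; the real entries are LangChain message objects whose text lives in a `.content` attribute:

```python
# Illustrative shape of the returned history (real entries are message objects).
messages = [
    ("user", "What was the stock price of NVIDIA at the Blackwell announcement?"),
    ("assistant", "<tool call: tavily search>"),
    ("tool", "<search results>"),
    ("assistant", "NVIDIA announced Blackwell on March 18, 2024; NVDA closed at $884.55."),
]

def final_answer(history):
    """The agent's answer is always the last message in the history."""
    return history[-1][1]

print(final_answer(messages))
```

Inspecting the intermediate tool and assistant messages is the simplest way to debug an agent that returns a wrong or incomplete answer.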
When the agent runs, it performs the necessary steps internally. A trace of its reasoning process looks like this:
Thought: The user is asking for the stock price of NVIDIA (NVDA) at a specific time: the announcement of the Blackwell GPU architecture. I need to find two pieces of information:
1. The date of the Blackwell GPU architecture announcement.
2. The stock price of NVDA on that date.
I will start by searching for the announcement date.
Action:
{
  "tool": "tavily_search_results",
  "tool_input": "NVIDIA Blackwell GPU architecture announcement date"
}
Observation: NVIDIA officially announced its Blackwell architecture during the GTC 2024 keynote on March 18, 2024.
Thought: I have found the announcement date: March 18, 2024. Now I need to find the stock price of NVDA on or around that date. I can perform another search for this information.
Action:
{
  "tool": "tavily_search_results",
  "tool_input": "NVDA stock price on March 18, 2024"
}
Observation: On March 18, 2024, NVIDIA Corp. (NVDA) stock closed at $884.55.
Thought: I have found both pieces of information.
1. The Blackwell GPU architecture was announced on March 18, 2024.
2. The closing stock price of NVDA on that day was $884.55.
I can now provide the final answer to the user.
Final Answer: NVIDIA announced the Blackwell GPU architecture on March 18, 2024. On that date, the closing stock price for NVDA was $884.55.
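Each Action block in the trace is structured data that the graph runtime parses and routes to the matching tool. A minimal sketch of that dispatch step, assuming the JSON shape shown in the trace (the `tools` registry here is a stand-in; a real runtime maps names to actual tool objects):

```python
import json

def dispatch(action_text, tools):
    """Parse an action payload and invoke the matching tool."""
    action = json.loads(action_text)
    return tools[action["tool"]](action["tool_input"])

# Stand-in tool registry for illustration.
tools = {"tavily_search_results": lambda q: f"searched: {q}"}

payload = '{"tool": "tavily_search_results", "tool_input": "NVDA stock price on March 18, 2024"}'
print(dispatch(payload, tools))
```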
This sequence demonstrates the agent's capability. It correctly decomposed the complex query into sub-problems, executed its search tool sequentially to gather the necessary facts, and then synthesized those facts into a complete and accurate answer. You have now built a simple but effective autonomous system that can interact with the outside environment to accomplish a goal. This fundamental pattern is the basis for more complex agents that can interact with databases, APIs, and other software systems.