An agent's ability to perform actions is entirely dependent on the tools it has at its disposal. LangChain provides an extensive library of pre-built tools that connect agents to a wide range of external services and data sources, from search engines to scientific calculators. Using these tools allows you to equip your agent with powerful capabilities without writing the underlying integration code from scratch.
The most direct way to get started is by using a tool that requires no external setup. The DuckDuckGoSearchRun tool is a good example, as it provides web search functionality without needing an API key.
Let's initialize an agent and provide it with this search tool. The agent's reasoning engine, the LLM, will then be able to decide when and how to use the search tool to answer a query.
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_react_agent, AgentExecutor
from langchain_community.agent_toolkits import load_tools
# Initialize the LLM
# Make sure your OPENAI_API_KEY is set in your environment
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
# Load the built-in tools
# LangChain provides a helper function 'load_tools' for convenience
tools = load_tools(["ddg-search", "llm-math"], llm=llm)
# Get the prompt to use - you can modify this!
# We pull the standard ReAct prompt from the LangChain Hub
prompt = hub.pull("hwchase17/react")
# Initialize the agent
# We use the create_react_agent constructor for ReAct agents
agent = create_react_agent(llm, tools, prompt)
# Create an agent executor to manage the execution loop
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# Run the agent with a query
query = "What was the score of the 2022 FIFA World Cup final, and what is that score raised to the power of 0.25?"
agent_executor.invoke({"input": query})
When you execute this code, the verbose=True setting lets you observe the agent's reasoning steps as it runs. You will see output that looks something like this:
> Entering new AgentExecutor chain...
Thought: I need to find the score of the 2022 FIFA World Cup final and then calculate that score raised to the power of 0.25. I will first search for the score and then use the calculator to do the math.
Action: duckduckgo_search
Action Input: 2022 FIFA World Cup final score
Observation: The final score of the 2022 FIFA World Cup final was Argentina 3–3 France after extra time. Argentina won 4–2 on penalties.
Thought: The score during the match was 3-3, which means a total of 6 goals. I will calculate 6 raised to the power of 0.25.
Action: Calculator
Action Input: 6^0.25
Observation: Answer: 1.56508458007
Thought: I have the final answer now.
Final Answer: The score of the 2022 FIFA World Cup final was 3-3 (Argentina won on penalties). The total score of 6 raised to the power of 0.25 is approximately 1.565.
> Finished chain.
This output reveals the agent's internal monologue. It correctly identified the two sub-tasks, chose the appropriate tool for each step, and synthesized the observations into a final, coherent answer.
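Under the hood, the executor's main job is parsing and dispatch: it extracts the Action and Action Input lines from the LLM's text, calls the matching tool, and feeds the result back as the Observation. The following is a simplified, self-contained sketch of that dispatch step; the stub tools and their canned responses are hypothetical stand-ins, not LangChain internals.

```python
import math
import re

# Hypothetical stand-ins for the real tools; the canned search result
# is illustrative only.
def fake_search(query: str) -> str:
    return "Argentina 3-3 France; Argentina won 4-2 on penalties."

def fake_calculator(expression: str) -> str:
    # Supports the '^' exponent syntax seen in the trace above
    return str(eval(expression.replace("^", "**"), {"__builtins__": {}}, vars(math)))

TOOLS = {"duckduckgo_search": fake_search, "Calculator": fake_calculator}

def run_step(llm_output: str) -> str:
    """Parse one Action / Action Input pair and return the Observation."""
    action = re.search(r"Action: (.+)", llm_output).group(1).strip()
    action_input = re.search(r"Action Input: (.+)", llm_output).group(1).strip()
    return TOOLS[action](action_input)

print(run_step("Action: Calculator\nAction Input: 6^0.25"))  # -> 1.5650845800732873
```

The real AgentExecutor adds error handling, iteration limits, and scratchpad management on top of this loop, but the core parse-call-observe cycle is the same.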
The flow is straightforward: the agent receives a user query, its LLM reasoning engine selects the appropriate tool (here, search), the tool executes and returns an observation, and the agent then generates a final answer.
Many of LangChain's most powerful built-in tools act as wrappers around third-party APIs, such as those from Google, Wikipedia, or WolframAlpha. To use these, you typically need to install the provider's Python SDK and set an API key as an environment variable.
For example, to use the WolframAlpha tool for computational queries, you first need to get an AppID from their developer portal.
pip install wolframalpha
Then, in your .env file or shell environment, set your AppID:
export WOLFRAM_ALPHA_APPID="YOUR_APP_ID_HERE"
Once the environment variable is set, LangChain's tool loader can automatically configure the tool.
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_react_agent, AgentExecutor
from langchain_community.agent_toolkits import load_tools
import os
# Ensure your keys are set as environment variables
# os.environ["OPENAI_API_KEY"] = "sk-..."
# os.environ["WOLFRAM_ALPHA_APPID"] = "YOUR_APP_ID"
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
# Load the WolframAlpha tool
# LangChain automatically finds and uses the WOLFRAM_ALPHA_APPID environment variable
tools = load_tools(["wolfram-alpha"], llm=llm)
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the second derivative of x^4 * sin(x)?"})
The agent will now recognize that this mathematical question is best suited for WolframAlpha and will delegate the task accordingly. This pattern of installing a package and setting an environment variable applies to most API-based tools in LangChain, providing a consistent way to expand your agent's capabilities.
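Because a missing API key often only surfaces as an error when the tool is first loaded or called, it can help to validate the environment up front. Below is a minimal sketch of that idea; require_env is a hypothetical helper, not part of LangChain, and the AppID value shown is a placeholder.

```python
import os

def require_env(*names: str) -> None:
    """Fail fast if any required environment variables are unset (hypothetical helper)."""
    missing = [name for name in names if not os.environ.get(name)]
    if missing:
        raise EnvironmentError("Missing environment variables: " + ", ".join(missing))

# Placeholder value for illustration only -- use your real AppID in practice
os.environ.setdefault("WOLFRAM_ALPHA_APPID", "DEMO-APPID")

# Validate before calling load_tools(["wolfram-alpha"], llm=llm)
require_env("WOLFRAM_ALPHA_APPID")
print("All required keys are set")
```

Calling this once at startup turns a confusing mid-run failure into an immediate, descriptive error.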
While built-in tools cover a wide array of common tasks, you will often need to grant an agent access to your own internal APIs or proprietary data sources. The next section covers how to create custom tools to give your agent these specialized abilities.