This initial example demonstrates LangChain's core utility: connecting modular components into a coherent, executable graph. We will build a simple program that takes a topic from a user, formats it into a prompt, sends it to a language model, and returns a structured response.
This process introduces the primary pattern for building with LangChain. You define the data flow, construct it by linking components, and then execute it with your input.
Our first application will use three essential components:
- A prompt template, which formats the user's input into a complete prompt for the model.
- A chat model, which sends the formatted prompt to the LLM and returns the response as an AIMessage object.
- An output parser, a simple component that transforms this raw output into a more usable format, like a plain string.

LangChain Expression Language (LCEL) is the standard method for chaining these components together. It uses the pipe (|) operator, where the output of one component is passed as the input to the next. This creates a clear and readable definition of your application's logic.
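To make the pipe composition concrete before we involve a model, here is a minimal, model-free sketch. The add_one and double names are hypothetical; the point is that any two LCEL components compose with | into a single runnable pipeline.

from langchain_core.runnables import RunnableLambda

# Two toy Runnables; every LCEL component exposes the same .invoke() interface.
add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)

# The | operator composes them into one pipeline that runs each step in order.
pipeline = add_one | double
print(pipeline.invoke(3))  # 8, because (3 + 1) * 2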
The flow of our simple application can be visualized as a sequence where data is transformed at each step.
The data pipeline for the first application. An input dictionary is used to format a prompt, which is sent to the model. The model's response is then parsed into a final string.
Let's translate this structure into a Python script. Ensure you have your OpenAI API credentials configured in your environment as shown in the previous section.
First, we import the necessary classes. ChatOpenAI is our model interface, ChatPromptTemplate is for our prompt, and StrOutputParser will clean up the final output.
# main.py
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Load environment variables from .env file
load_dotenv()
# Ensure the OPENAI_API_KEY is set
if "OPENAI_API_KEY" not in os.environ:
raise ValueError("OPENAI_API_KEY environment variable not set.")
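For reference, the .env file read by load_dotenv() needs only a single line. The value below is a placeholder; substitute your own key.

# .env
OPENAI_API_KEY=sk-your-key-here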
Next, we instantiate the three components. Our prompt template will contain a placeholder {topic} that will be filled with user input.
# 1. Initialize the Model
# We'll use gpt-4o-mini for its speed and capability.
# The temperature parameter controls randomness; 0 means more deterministic output.
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# 2. Define the Prompt Template
# This template instructs the model to act as a helpful assistant and generate
# a single, interesting fact about a given topic.
prompt = ChatPromptTemplate.from_template(
"You are a helpful assistant. Tell me one interesting fact about {topic}."
)
# 3. Initialize the Output Parser
# This will convert the model's message object into a simple string.
parser = StrOutputParser()
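Because each of these components is a Runnable, you can sanity-check them individually before chaining them. A quick sketch (the example inputs here are illustrative):

from langchain_core.messages import AIMessage

# The prompt alone turns an input dictionary into a formatted prompt value.
print(prompt.invoke({"topic": "honey"}))

# The parser alone extracts the string content from a message object.
print(parser.invoke(AIMessage(content="Honey never spoils.")))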
With the components defined, we can now link them together using the LCEL | operator to form a chain. This chain object is a runnable pipeline.
# 4. Construct the chain by piping components together
chain = prompt | model | parser
This single line of code defines the entire workflow. The prompt component will receive the initial input, its output will be sent to the model, and the model's output will be processed by the parser.
To run the chain, we use its invoke() method. The input for the chain must match the variables expected by the first component, which in this case is the prompt template. Since our template expects a {topic}, we pass a dictionary with a topic.
# 5. Invoke the chain with an input dictionary
input_data = {"topic": "the Eiffel Tower"}
response = chain.invoke(input_data)
print(response)
Running this script will produce a response from the model, such as:
The Eiffel Tower was originally intended to be a temporary installation for the 1889 World's Fair and was almost dismantled in 1909, but was saved because it was repurposed as a giant radiotelegraph antenna.
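invoke() is not the only way to execute a chain. Every LCEL chain also exposes stream() and batch(), which accept the same input format; a brief sketch:

# Stream the response chunk by chunk instead of waiting for the full string.
for chunk in chain.stream({"topic": "the Eiffel Tower"}):
    print(chunk, end="", flush=True)

# Run the chain over several inputs in one call.
facts = chain.batch([{"topic": "Mars"}, {"topic": "the Colosseum"}])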
When invoke() is called, LangChain handles the entire sequence of operations:
1. The input_data dictionary is passed to the prompt object.
2. The ChatPromptTemplate substitutes the {topic} placeholder with "the Eiffel Tower" and produces a formatted prompt.
3. The formatted prompt is sent to the model (ChatOpenAI).
4. The model calls the OpenAI API and returns its response as an AIMessage object.
5. The AIMessage object is passed to the parser (StrOutputParser).
6. The parser extracts the string content from the AIMessage and returns it as the result of the invoke() call.

You have now successfully built and executed your first LangChain application. This simple prompt | model | parser structure forms the basis for nearly all applications you will build. In the following chapters, we will expand on this pattern by adding more sophisticated components for managing memory, connecting to external data, and enabling more complex, multi-step reasoning.