The chain is the fundamental pattern for combining individual components that interact with a language model into a single, reusable unit. The simplest chain links a prompt template directly to a model: it takes user input, formats it with the template, and returns the model's output.
At its core, this sequence wraps the process of taking input variables, using them to construct a prompt, sending that prompt to an LLM, and returning the result. This workflow is the central mechanism of almost every LLM-powered application.
The flow of data through a chain is direct and predictable. Input, typically a dictionary, provides the variables for the prompt template. The template then formats these variables into a complete prompt string, which is passed to the language model. The model generates a response, which is the final output.
Data flow within a basic chain, from input variables to the final output from the model.
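Conceptually, this flow can be sketched in a few lines of plain Python. The helper below is only an illustration of the three steps; call_model is a hypothetical stand-in for whatever function actually sends a prompt to an LLM.
# A minimal, hand-rolled sketch of the chain's data flow (no LangChain involved).
def run_chain(inputs: dict, template: str, call_model) -> str:
    prompt = template.format(**inputs)  # 1. format the input variables into a prompt
    response = call_model(prompt)       # 2. send the prompt to the language model
    return response                     # 3. return the model's output

# Example usage with a fake model that just echoes its prompt:
fake_model = lambda p: f"(model response to: {p})"
print(run_chain({"topic": "chains"}, "Explain {topic} in one sentence.", fake_model))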
Let's build a simple chain that generates a creative name for a tech startup based on a short description of its product. This requires a prompt template to guide the LLM and a model to perform the generation.
First, ensure you have the necessary libraries installed and your environment variables (like OPENAI_API_KEY) are configured as discussed in Chapter 1.
#
# main.py
#
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
# 1. Instantiate the Language Model
# We'll use OpenAI's gpt-3.5-turbo model, with a low temperature
# to encourage more predictable, less random outputs.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.3)
# 2. Define the Prompt Template
# The template expects one input variable: 'product_description'.
template = """
You are a creative naming expert. Generate a single, catchy name
for a tech startup that specializes in {product_description}.
Name:
"""
prompt = PromptTemplate(
    input_variables=["product_description"],
    template=template
)
# 3. Create the Chain
# We use the pipe operator (|) to connect the prompt and the model.
# This syntax is known as the LangChain Expression Language (LCEL).
name_generation_chain = prompt | llm
# 4. Run the chain with an input
# We use the .invoke() method and pass a dictionary containing the
# variable required by our prompt template.
product_description = "an AI-powered platform for personal finance management"
result = name_generation_chain.invoke({"product_description": product_description})
print(result)
Running this code will produce output similar to this:
content='Finara' response_metadata={...} id='...'
The output is an AIMessage object (since we are using a Chat Model). The generated text is stored in the content attribute. This object structure preserves metadata about the generation, which can be useful for debugging or logging.
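For example, the generated text and its metadata can be read directly from the message's attributes; the values in the comments follow the sample output above.
# Accessing the fields of the returned AIMessage.
print(result.content)            # 'Finara' -- the generated text
print(result.response_metadata)  # token usage, model name, and other details
print(result.id)                 # unique identifier for this generation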
While the basic chain above works well, we often want just the raw string output rather than the full message object. Since the chain is already composed with the LangChain Expression Language (LCEL), this only requires appending one more component.
LCEL lets us compose chains with the pipe operator (|), much like piping commands together in a Unix shell. We can add an output parser to the end of our chain to automatically extract the string content.
#
# main_lcel.py
#
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Components remain the same
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.3)
prompt = PromptTemplate(
    input_variables=["product_description"],
    template="""You are a creative naming expert. Generate a single, catchy name
for a tech startup that specializes in {product_description}.
Name:"""
)
# The StrOutputParser extracts the text content from the message.
output_parser = StrOutputParser()
# Construct the chain using the pipe operator
# prompt -> llm -> output_parser
name_generation_chain_lcel = prompt | llm | output_parser
# Invoke the chain with the same input
product_description = "an AI-powered platform for personal finance management"
result = name_generation_chain_lcel.invoke({"product_description": product_description})
print(result)
This updated version produces a cleaner output:
Finara
By piping the output of the model into StrOutputParser, we directly get the string response. The LCEL syntax simplifies the code and makes it easier to add, remove, or swap components. For instance, adding a different output parser, as you learned about in Chapter 2, is as simple as changing the last element in the sequence.
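As a sketch of that flexibility, here is one way the string parser could be swapped for a list parser (CommaSeparatedListOutputParser, one of the parsers from Chapter 2), together with a prompt that asks for several names. The prompt wording is illustrative; the llm and product_description variables are reused from the example above.
from langchain_core.output_parsers import CommaSeparatedListOutputParser

# Swap the last element of the chain: same prompt-model pattern, different parser.
list_prompt = PromptTemplate(
    input_variables=["product_description"],
    template="""You are a creative naming expert. Generate five catchy names,
separated by commas, for a tech startup that specializes in {product_description}.
Names:"""
)
name_list_chain = list_prompt | llm | CommaSeparatedListOutputParser()

names = name_list_chain.invoke({"product_description": product_description})
print(names)  # a Python list of name strings instead of a single string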
Using this composable pattern provides significant benefits:
Every component exposes the same standard methods (invoke, batch, stream), making them interchangeable; the pipe syntax keeps chain definitions concise; and individual steps can be added, removed, or swapped without rewriting the rest of the pipeline.
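As a brief illustration of that shared interface, the chain built above can also be run over a batch of inputs or streamed token by token; the second product description below is just an example input.
# The same chain exposes batch and streaming execution out of the box.
descriptions = [
    {"product_description": "an AI-powered platform for personal finance management"},
    {"product_description": "a tool that summarizes legal contracts"},
]
print(name_generation_chain_lcel.batch(descriptions))  # one generated name per input

for chunk in name_generation_chain_lcel.stream(descriptions[0]):
    print(chunk, end="", flush=True)  # pieces of the response arrive incrementally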