In the previous chapter, we explored the fundamental components of LangChain: interacting with language models, crafting effective prompts, and parsing the generated output. These elements are powerful on their own, but many real-world applications require more than a single call to an LLM. Often, you need to perform a sequence of operations, where the output of one step feeds into the next. This is where LangChain Chains come into play.
Think of Chains as the assembly lines for your LLM workflows. They provide a structured way to connect multiple components, such as LLMs, prompt templates, output parsers, or even other chains, into a coherent sequence. By linking these elements, you can build applications that perform multi-step reasoning, data transformation, or task decomposition.
At its core, a chain executes a series of steps in a defined order. The defining characteristic is that the output generated by one step in the sequence typically becomes the input for the subsequent step. This allows you to orchestrate complex tasks by breaking them down into smaller, manageable parts.
For example, imagine you want to:
1. Generate a catchy marketing slogan for a product.
2. Translate that slogan into French.
This requires two distinct LLM calls. A chain allows you to execute them sequentially, automatically passing the generated slogan from the first step to the translation step.
The most fundamental type of chain is the LLMChain, which we touched upon earlier. It combines a prompt template, a language model, and optionally an output parser. While simple, LLMChain instances are the building blocks often used within more complex sequential chains.
# Assuming 'llm' is an initialized language model instance
# Assuming 'prompt_template' is an initialized PromptTemplate instance
from langchain.chains import LLMChain
# An LLMChain takes a prompt template and an LLM
basic_chain = LLMChain(llm=llm, prompt=prompt_template)
# You can run it by providing the input variables for the prompt
input_data = {"product": "A durable, lightweight hiking backpack"}
response = basic_chain.run(input_data)
print(response)
# Expected output: A string generated by the LLM based on the prompt
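As noted above, an LLMChain can also be paired with an output parser so the chain returns structured data instead of raw text. The snippet below is a minimal sketch, assuming the same llm instance; the prompt wording and variable names are illustrative:
# Pairing an LLMChain with an output parser (illustrative sketch)
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
list_prompt = PromptTemplate(
    template="List three selling points for {product}.\n{format_instructions}",
    input_variables=["product"],
    # Embed the parser's formatting instructions in the prompt
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
list_chain = LLMChain(llm=llm, prompt=list_prompt)
raw_output = list_chain.run({"product": "A durable, lightweight hiking backpack"})
# Parse the comma-separated reply into a Python list
selling_points = parser.parse(raw_output)
print(selling_points)  # e.g. ['Lightweight', 'Durable', 'Water-resistant']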
For straightforward sequences where each step takes a single string input and produces a single string output, LangChain provides the SimpleSequentialChain. It takes a list of chains and executes them in order, feeding the output of one directly as the input to the next.
Let's implement the slogan generation and translation example:
# Assuming 'llm' is an initialized language model instance
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
# Chain 1: Generate a slogan
template1 = "Generate a catchy, short marketing slogan for a {product_description}."
prompt1 = PromptTemplate(input_variables=["product_description"], template=template1)
chain_one = LLMChain(llm=llm, prompt=prompt1)
# Chain 2: Translate the slogan to French
template2 = "Translate the following slogan into French: {slogan}"
prompt2 = PromptTemplate(input_variables=["slogan"], template=template2)
chain_two = LLMChain(llm=llm, prompt=prompt2)
# Combine them using SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
# Run the combined chain
product_info = "waterproof, solar-powered camping lantern"
french_slogan = overall_chain.run(product_info)
print(french_slogan)
When you run this, SimpleSequentialChain first executes chain_one with the product_info. The output string (the slogan) is then automatically passed as the input (slogan) to chain_two. The final output is the result from chain_two (the French translation). The verbose=True argument helps visualize this flow by printing intermediate steps.
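For comparison, here is a rough sketch of what SimpleSequentialChain automates, reusing the chain_one and chain_two instances defined above:
# Equivalent manual version of the sequential flow
slogan = chain_one.run(product_info)   # step 1: generate the slogan
french_slogan = chain_two.run(slogan)  # step 2: translate it to French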
SimpleSequentialChain is convenient but limited to single string inputs/outputs between steps. What if a later step needs information generated by multiple earlier steps, or if a step produces multiple outputs? This is where the more general SequentialChain comes in. It allows you to explicitly define the input and output variables for each chain in the sequence, giving you greater control over how data flows between the steps.
Consider a three-step scenario:
1. Generate a brief explanation of a topic.
2. Identify the main keywords in that explanation.
3. Write an introductory paragraph about the topic using those keywords.
Notice that step 3 requires both the original topic (the input to step 1) and the keywords produced by step 2.
# Assuming 'llm' is an initialized language model instance
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
# Chain 1: Generate Explanation
template_explain = "Provide a brief explanation of the topic: {topic}"
prompt_explain = PromptTemplate(input_variables=["topic"], template=template_explain)
# Output key defaults to 'text', let's rename it for clarity
chain_explain = LLMChain(llm=llm, prompt=prompt_explain, output_key="explanation")
# Chain 2: Identify Keywords
template_keywords = "Identify the main keywords in the following text:\n{explanation}"
prompt_keywords = PromptTemplate(input_variables=["explanation"], template=template_keywords)
chain_keywords = LLMChain(llm=llm, prompt=prompt_keywords, output_key="keywords")
# Chain 3: Write Introduction
template_intro = "Write a short introductory paragraph about {topic} using these keywords: {keywords}"
prompt_intro = PromptTemplate(input_variables=["topic", "keywords"], template=template_intro)
chain_intro = LLMChain(llm=llm, prompt=prompt_intro, output_key="intro_paragraph")
# Combine using SequentialChain
complex_chain = SequentialChain(
chains=[chain_explain, chain_keywords, chain_intro],
input_variables=["topic"], # Input for the entire sequence
# Output variables from the sequence (specify which ones you want)
output_variables=["explanation", "keywords", "intro_paragraph"],
verbose=True
)
# Run the complex chain
input_topic = "Quantum Computing"
result = complex_chain({"topic": input_topic})
print("\n--- Results ---")
print(f"Explanation:\n{result['explanation']}")
print(f"\nKeywords:\n{result['keywords']}")
print(f"\nIntro Paragraph:\n{result['intro_paragraph']}")
In SequentialChain, you define input_variables for the overall sequence and the output_variables that you want returned at the end. LangChain automatically manages the intermediate outputs (explanation and keywords) based on the output_key specified in each LLMChain and the input_variables expected by subsequent chains.
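If you only care about the final paragraph, you can request just that key. A small sketch, reusing the chains defined above:
# Return only the final output; intermediate values still flow internally
final_only_chain = SequentialChain(
    chains=[chain_explain, chain_keywords, chain_intro],
    input_variables=["topic"],
    output_variables=["intro_paragraph"],  # only this key is returned
)
result = final_only_chain({"topic": "Quantum Computing"})
print(result["intro_paragraph"])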
Understanding the flow of data is important, especially as chains become more complex. Here's a simple diagram representing the SimpleSequentialChain example (slogan generation and translation):
Diagram showing the sequential processing of a product description through two LLM chains to produce a translated slogan.
Employing chains in your LangChain applications offers several advantages:
- Task decomposition: complex workflows are broken into smaller, manageable steps.
- Automatic data flow: the output of one step is passed to the next without manual wiring.
- Reusability: individual chains, such as LLMChain instances, serve as building blocks for larger sequences.
Chains provide a powerful mechanism for structuring sequential workflows. However, some tasks require more dynamic behavior, where the next step isn't predetermined but depends on the outcome of the previous one. For such scenarios, LangChain offers Agents, which use an LLM's reasoning capabilities to decide which actions to take next. We will examine Agents in the subsequent section.