While a single chain is effective for self-contained tasks, most applications involve a series of operations. For example, to generate a technical blog post, you might first create an outline, then write an introduction based on the outline, and finally, generate social media posts to promote it. Each step depends on the output of the one before it. LangChain allows you to sequence these operations into pipelines using the LangChain Expression Language (LCEL).
The most direct way to link operations is by composing them into a linear sequence where each step passes its output to the next.
Consider a two-step process: first generate a play title from a topic, then write a synopsis from that title.
The data flows in a straight line: topic -> title -> synopsis.
Data flow in a linear sequence. Each component feeds its output to the next.
Let's implement this. We will set up two chains using prompts and a chat model. We use StrOutputParser to ensure the output from the model is a clean string, ready for the next step.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
# Assumes OPENAI_API_KEY is set in your environment
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
# Chain 1: Generate a title for a play
prompt_title = ChatPromptTemplate.from_template(
"You are a playwright. Given the topic '{topic}', write a catchy title for a play."
)
chain_title = prompt_title | llm | StrOutputParser()
# Chain 2: Generate a synopsis for the play
prompt_synopsis = ChatPromptTemplate.from_template(
"You are a theater critic. Given the play title '{title}', write a short, one-paragraph synopsis."
)
chain_synopsis = prompt_synopsis | llm | StrOutputParser()
To connect these, we need to ensure the output of chain_title matches the expected input of chain_synopsis. chain_synopsis expects a dictionary with a title key, but chain_title returns a plain string, so we insert a mapping step: the dictionary {"title": RunnablePassthrough()} wraps the incoming string under the title key.
# Create the sequence using the pipe operator
overall_chain = (
chain_title
| {"title": RunnablePassthrough()}
| chain_synopsis
)
# Run the chain with the initial topic
topic = "the rise and fall of a 1920s jazz musician"
final_synopsis = overall_chain.invoke({"topic": topic})
print(final_synopsis)
When you run this chain, LangChain executes chain_title, takes the resulting string, maps it to the title key, and passes it to chain_synopsis.
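To make the data flow concrete, here is a plain-Python sketch of the same three-step sequence. The functions fake_title and fake_synopsis are hypothetical stand-ins for the two LLM chains (no API calls); only the shape of the data passed between steps mirrors the LCEL pipeline above.

```python
# Plain-Python sketch of the linear pipeline's data flow.
# fake_title and fake_synopsis are stand-ins for the two LLM chains.

def fake_title(inputs: dict) -> str:
    # chain_title: {'topic': ...} -> title string
    return f"A Play About {inputs['topic'].title()}"

def map_to_title(title: str) -> dict:
    # The {'title': RunnablePassthrough()} step: wrap the string in a dict
    return {"title": title}

def fake_synopsis(inputs: dict) -> str:
    # chain_synopsis: {'title': ...} -> synopsis string
    return f"A synopsis of '{inputs['title']}'."

def overall(inputs: dict) -> str:
    # Equivalent of chain_title | {"title": ...} | chain_synopsis
    return fake_synopsis(map_to_title(fake_title(inputs)))

print(overall({"topic": "jazz"}))  # A synopsis of 'A Play About Jazz'.
```

Notice that by the time fake_synopsis runs, the original topic is gone: each step sees only what the previous step handed it. This is the limitation the next section addresses.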
The primary limitation of a strictly linear flow is that intermediate data is often lost or unavailable to later steps unless explicitly passed along.
For more complex workflows, you often need to maintain state across multiple steps, for example when a later step needs both the original input and an intermediate result.
The RunnablePassthrough.assign() method manages this by adding new values to the dictionary flowing through the chain without overwriting existing data. It treats the workflow as a cumulative state.
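The merging behavior can be pictured in plain Python: each assign runs its computation on the current state dictionary and merges the result back in, never dropping existing keys. Below is a minimal sketch; assign_step is a hypothetical helper written for illustration, not a LangChain API.

```python
def assign_step(state: dict, **computations) -> dict:
    # Mimics RunnablePassthrough.assign: compute new values from the
    # current state and merge them in without dropping existing keys.
    return {**state, **{key: fn(state) for key, fn in computations.items()}}

state = {"topic": "jazz"}
state = assign_step(state, title=lambda s: f"Ode to {s['topic']}")
state = assign_step(state, synopsis=lambda s: f"{s['title']}, about {s['topic']}.")
print(state)
# {'topic': 'jazz', 'title': 'Ode to jazz', 'synopsis': 'Ode to jazz, about jazz.'}
```

The second step can read both topic and title because the first step added to the state instead of replacing it.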
Let's modify our previous example. Suppose the synopsis chain needs to know both the generated title and the original topic to add more context.
Data flow with state management.
assign adds outputs to the state, making them available for subsequent steps.
To implement this, we chain assign calls. Each step calculates a value and appends it to the state.
# Setup the LLM and Prompts
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
# Chain 1 Definition: Just the logic to get the title
prompt_title = ChatPromptTemplate.from_template(
"You are a playwright. Given the topic '{topic}', write a catchy title for a play."
)
chain_title = prompt_title | llm | StrOutputParser()
# Chain 2 Definition: Expects 'title' and 'topic'
prompt_synopsis = ChatPromptTemplate.from_template(
"Write a short, one-paragraph synopsis for a play titled '{title}' about '{topic}'."
)
chain_synopsis = prompt_synopsis | llm | StrOutputParser()
Now we construct the pipeline. We use assign to capture the outputs at each stage.
# Create the chain with state management
overall_chain = (
# Step 1: Calculate title and add it to the stream.
# The stream now contains {'topic': ..., 'title': ...}
RunnablePassthrough.assign(title=chain_title)
|
# Step 2: Calculate synopsis.
# It has access to both 'topic' and 'title' from the stream.
RunnablePassthrough.assign(synopsis=chain_synopsis)
)
# Run the chain
input_data = {"topic": "the rise and fall of a 1920s jazz musician"}
result = overall_chain.invoke(input_data)
print(result)
The output of this invocation is a dictionary containing the original input and all assigned variables.
{
"topic": "the rise and fall of a 1920s jazz musician",
"title": "The Blue Note's Last Refrain",
"synopsis": "Set against the vibrant, chaotic backdrop of the Roaring Twenties, 'The Blue Note's Last Refrain' chronicles the meteoric rise and heartbreaking fall of saxophonist Leo 'King' Creole..."
}
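Because the result is a regular dictionary, downstream code can pick out individual fields with standard tools. In LCEL you can typically append `| itemgetter("synopsis")` to the chain itself, since callables are coerced into runnables; the sketch below demonstrates the same extraction on a plain dict (the result values are abbreviated placeholders).

```python
from operator import itemgetter

# The chain's output is a plain dict, so standard tools apply.
result = {
    "topic": "the rise and fall of a 1920s jazz musician",
    "title": "The Blue Note's Last Refrain",
    "synopsis": "Set against the Roaring Twenties...",
}

get_synopsis = itemgetter("synopsis")
print(get_synopsis(result))  # Set against the Roaring Twenties...
```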
By using RunnablePassthrough.assign, you gain fine-grained control over the data pipeline. This structure allows you to preserve and reuse information from any point in the sequence, which is essential for sophisticated, multi-step reasoning processes.