In this section, we build a content generation pipeline: a practical, multi-step application constructed with LangChain Expression Language (LCEL). The pipeline performs two distinct tasks in order: first, it generates a blog post outline from a given topic, and second, it writes an introduction for that post using the generated outline.
This process mirrors many workflows where a complex task is broken down into smaller, manageable sub-tasks, with the output of one step feeding into the next.
Before building the final pipeline, we need to define each individual operation that will become a part of it. Our pipeline requires two steps: one for creating an outline and another for writing the introduction.
First, let's set up our language model and necessary imports. For this example, we will use ChatOpenAI.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
# Initialize the chat model. Ensure your OpenAI API key is configured.
llm = ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo")
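ChatOpenAI reads the API key from the OPENAI_API_KEY environment variable. If the key is not already set, one convenient way to supply it interactively is sketched below; this is just one option, and any method of setting the environment variable works equally well.
import os
from getpass import getpass

# Prompt for the key only if it isn't already set in the environment
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")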
The first step in our pipeline is to generate a structured outline for a blog post. This chain will take a single input, topic, and produce an output string.
We define a PromptTemplate that instructs the model to create a bulleted list. Then, we create a sequence using the pipe | operator, connecting the prompt, the model, and a StrOutputParser to ensure the output is a clean string.
# Prompt and chain for generating the blog post outline
prompt_outline = PromptTemplate(
    input_variables=["topic"],
    template="Create a concise, bulleted outline for a blog post about {topic}."
)
outline_chain = prompt_outline | llm | StrOutputParser()
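Because each chain is a runnable in its own right, you can test it in isolation before wiring it into the larger pipeline. The quick check below is optional, and the exact output will vary from run to run; the same pattern works for the introduction chain we define next.
# Optional: test the outline chain on its own
print(outline_chain.invoke({"topic": "the benefits of serverless computing"}))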
The second chain's job is to write an introduction. This chain is more complex because it requires two inputs: the original topic and the outline produced by the first chain. Its PromptTemplate reflects this, instructing the model to use the provided outline as a guide.
# Prompt and chain for writing the introduction
prompt_intro = PromptTemplate(
    input_variables=["topic", "outline"],
    template="Write an engaging introduction for a blog post about {topic}, using the following outline as a guide:\n\n{outline}"
)
intro_chain = prompt_intro | llm | StrOutputParser()
With both chains defined, we can assemble them into a single workflow using LCEL primitives. We use RunnablePassthrough.assign to manage the data flow. This method lets us compute new values (such as the outline or the introduction) and add them to the dictionary of inputs flowing through the pipeline.
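To make the behavior of assign concrete, here is a minimal sketch using a trivial runnable in place of a real chain. The doubled key and the lambda are illustrative stand-ins, not part of the pipeline we are building.
from langchain_core.runnables import RunnableLambda

# A toy example: the input dict passes through unchanged, gaining a new key
demo = RunnablePassthrough.assign(doubled=RunnableLambda(lambda d: d["x"] * 2))
print(demo.invoke({"x": 3}))  # {'x': 3, 'doubled': 6}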
The diagram below illustrates the data flow. The initial topic is used by both chains, while the outline output from the first chain becomes an input for the second.
A diagram of the content generation pipeline. The topic is used by both chains, while the outline generated by the first chain is passed to the second.
When constructing the pipeline, we chain the assignments together. First, we assign the output of the outline_chain to the key outline. Next, we assign the output of the intro_chain to the key introduction. This structure ensures that the second chain has access to both the initial topic and the generated outline.
# Create the pipeline using RunnablePassthrough.assign
# This allows us to add new keys to the dictionary as we progress
content_pipeline = (
    RunnablePassthrough.assign(outline=outline_chain)
    | RunnablePassthrough.assign(introduction=intro_chain)
)
Now, we can execute the entire pipeline with a single call. We provide a dictionary containing our initial input variable, "topic".
# Provide a topic and run the pipeline
topic = "the benefits of serverless computing"
result = content_pipeline.invoke({"topic": topic})
The final result is a dictionary containing all keys accumulated during the process, including our inputs and outputs. Let's inspect the results.
# Print the results
print("\n------ GENERATED OUTLINE ------")
print(result["outline"])
print("\n------ GENERATED INTRODUCTION ------")
print(result["introduction"])
Example Output:
------ GENERATED OUTLINE ------
- Introduction to Serverless Computing
- Cost Efficiency
- Scalability and Flexibility
- Increased Developer Productivity
- Reduced Operational Overhead
- Use Cases for Serverless Computing
- Conclusion
------ GENERATED INTRODUCTION ------
In the evolving cloud computing sector, serverless architecture has emerged as a significant shift in how applications are built and deployed. By abstracting away the underlying infrastructure, serverless computing allows developers to focus solely on writing code, leading to increased productivity and innovation. This article looks at the benefits of adopting a serverless approach, from cost savings and automatic scalability to reduced operational burdens. We will examine how this model enhances developer productivity and look at practical use cases where serverless shines, providing a clear picture of why it is becoming a preferred choice for modern application development.
As you can see, the pipeline successfully executed both steps. The intro_chain effectively used the context provided by the outline_chain to generate a relevant and well-structured introduction. This simple two-step process demonstrates the capability of LCEL for creating sophisticated, automated workflows. You could extend this pipeline further by adding more chains to write each section of the blog post based on the outline, creating a complete content draft from a single topic input.
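As a sketch of that extension, the chain below would draft one section at a time. The prompt wording and the section parsing (splitting the outline on bullet lines) are assumptions for illustration, not a prescribed approach.
# Hypothetical extension: draft each section listed in the outline
prompt_section = PromptTemplate(
    input_variables=["topic", "outline", "section"],
    template=(
        "Write the '{section}' section of a blog post about {topic}. "
        "Stay consistent with this outline:\n\n{outline}"
    )
)
section_chain = prompt_section | llm | StrOutputParser()

# Naive parsing: treat each bullet line of the generated outline as a section title
sections = [line.lstrip("- ").strip()
            for line in result["outline"].splitlines()
            if line.strip().startswith("-")]

# batch() runs the chain once per section, reusing the same topic and outline
drafts = section_chain.batch(
    [{"topic": topic, "outline": result["outline"], "section": s} for s in sections]
)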