Applications often require dynamic behavior, rather than executing a predefined, linear series of steps. For instance, consider a system designed to answer academic questions. A query about quantum mechanics requires a different context and tone than a query about the Roman Empire. Forcing both through the same generic prompt and model would produce suboptimal results. A mechanism is needed to intelligently route an input to the most appropriate processing path.
This is precisely the problem that a routing workflow is designed to solve. It introduces conditional logic into an application, allowing it to choose one of several possible paths based on the input. Instead of a fixed sequence like g(f(Input)), a router enables a conditional structure, similar to an if-else statement in programming, with an LLM acting as the decision-making engine that directs the flow of execution.
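Before looking at the LLM-driven version, a minimal hand-written sketch makes the idea concrete. The keyword check and the handler functions here are hypothetical placeholders purely for illustration; the rest of this section replaces both with LLM calls.
def handle_physics(question: str) -> str:
    return f"[physics persona would answer] {question}"

def handle_history(question: str) -> str:
    return f"[history persona would answer] {question}"

def route_question(question: str) -> str:
    # The if-else that an LLM-based router automates for us.
    if "quantum" in question.lower():
        return handle_physics(question)
    return handle_history(question)

print(route_question("What is quantum entanglement?"))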
A routing workflow is composed of two primary elements: a router chain, which uses an LLM to classify the input and output the name of the most suitable destination, and a set of destination chains, each specialized for a particular kind of input. In the academic example above, the destinations might be a physics_chain, a history_chain, and a math_chain; the router selects one of these to execute based on its analysis. The entire process works as follows: the user's input is first passed to the routing chain. The routing chain's LLM outputs the name of a destination. The system then uses this name to look up the corresponding destination chain from its collection and executes it with the original input.
An input is first evaluated by a routing LLM, which selects one of several specialized destination chains to generate the final output.
Let's construct a practical example of a routing workflow that sends questions to different experts: a physicist, a mathematician, and a historian.
First, we define the prompts for our destination chains. Each prompt primes the LLM to adopt a specific persona.
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Assume OPENAI_API_KEY is set in your environment
# os.environ["OPENAI_API_KEY"] = "your-api-key"
llm = ChatOpenAI(temperature=0, model="gpt-4o")
physics_template = """You are a very smart physics professor.
You are great at answering questions about physics in a concise and easy to understand manner.
When you don't know the answer to a question you admit that you don't know.
Here is a question:
{input}"""
math_template = """You are a very good mathematician. You are great at answering math questions.
You are so good because you are able to break down hard problems into their component parts,
answer the component parts, and then put them together to answer the broader question.
Here is a question:
{input}"""
history_template = """You are a very good historian. You have an excellent knowledge of and memory for history.
You have a particular skill in answering questions by telling a story.
Here is a question:
{input}"""
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template,
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template,
    },
    {
        "name": "history",
        "description": "Good for answering history questions",
        "prompt_template": history_template,
    },
]
Next, we create the destination chains using the pipe | syntax. Each chain consists of a prompt, the LLM, and an output parser.
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate.from_template(template=prompt_template)
    chain = prompt | llm | StrOutputParser()
    destination_chains[name] = chain
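At this point each destination chain can already be used on its own. A quick sanity check helps confirm the personas behave as expected; the exact wording of the answer will vary by model.
# Sanity check: call the physics chain directly, bypassing any routing.
print(destination_chains["physics"].invoke({"input": "What is inertia?"}))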
Now for the core routing logic. We define a prompt that instructs the LLM to act as a classifier. It should output only the name of the category that best matches the input.
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = """Given the user input below, classify it as either being about {options}.
Description of choices:
{destinations}
Return the name of the choice, and nothing else.
Input:
{input}
"""
router_prompt = PromptTemplate.from_template(router_template)
router_prompt = router_prompt.partial(destinations=destinations_str, options=", ".join([p['name'] for p in prompt_infos]))
router_chain = router_prompt | llm | StrOutputParser()
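It is worth testing the router in isolation before composing the full workflow. Given the prompt above, a calculus question should classify as math, although the exact casing and whitespace of the model's output can vary, which is why the routing function below normalizes it.
# Inspect the router's raw classification on its own.
# Expected output: the bare category name, e.g. "math".
print(router_chain.invoke({"input": "What is the derivative of x^2?"}))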
Finally, we assemble everything using RunnableLambda. We define a function route that takes the output of the router chain (the category name) and returns the corresponding destination chain.
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
def route(info):
    # The router_chain output is passed here via info["destination"]
    destination = info["destination"].strip().lower()
    # Select the chain from our dictionary
    if destination in destination_chains:
        return destination_chains[destination]
    else:
        # Fallback to physics if unclear for now
        return destination_chains["physics"]
# We build the full chain:
# 1. Run the router and pass the input through
# 2. Use the route function to select the next chain
chain = {
"destination": router_chain,
"input": RunnablePassthrough()
} | RunnableLambda(route)
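As a side note, the same conditional selection can be written declaratively with LangChain's RunnableBranch, which takes (condition, runnable) pairs followed by a default runnable. The sketch below is an equivalent formulation of the route function above, not a required part of the example; the rest of this section continues with the RunnableLambda version.
from langchain_core.runnables import RunnableBranch

# Equivalent routing expressed as (condition, runnable) pairs plus a default.
# Each condition receives the {"destination": ..., "input": ...} dictionary.
branch = RunnableBranch(
    (lambda x: "physics" in x["destination"].lower(), destination_chains["physics"]),
    (lambda x: "math" in x["destination"].lower(), destination_chains["math"]),
    (lambda x: "history" in x["destination"].lower(), destination_chains["history"]),
    destination_chains["physics"],  # default, mirroring the fallback above
)

chain_with_branch = {
    "destination": router_chain,
    "input": RunnablePassthrough()
} | branch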
Let's test it with a few different inputs.
# Test with a physics question
response_physics = chain.invoke("What is the formula for gravitational potential energy?")
print(f"Physics Response: {response_physics}")
# Test with a history question
response_history = chain.invoke("When was the Battle of Hastings?")
print(f"History Response: {response_history}")
When you run the physics question, the router correctly identifies the physics category, and the output is generated by the specialized physics prompt:
Physics Response: The formula for gravitational potential energy (U) near the surface of a planet is:
U = mgh
where:
- m is the mass of the object.
- g is the acceleration due to gravity.
- h is the height of the object relative to a reference point.
Similarly, the history question is correctly routed to the history chain. This dynamic routing allows the application to use specialized prompts, significantly improving the quality and relevance of its responses.
A well-designed system must gracefully handle ambiguity. What if a user asks a question that does not fit neatly into any of the predefined categories, such as "What is the history of mathematics?" The router might struggle to make a clear choice.
To account for this, we can define a default chain and update our routing logic to fall back to it whenever the router's output does not match a known destination.
Here is how you would add a default chain to our example:
# Create a generic chain as a fallback
default_prompt = PromptTemplate.from_template("{input}")
default_chain = default_prompt | llm | StrOutputParser()
def route(info):
    destination = info["destination"].strip().lower()
    # Use the .get() method to provide a default fallback
    return destination_chains.get(destination, default_chain)
# Re-initialize the full chain with the updated routing logic
chain = {
"destination": router_chain,
"input": RunnablePassthrough()
} | RunnableLambda(route)
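You can exercise the fallback with a question that fits none of the three categories. If the router emits anything other than physics, math, or history, the generic default_chain produces the answer.
# A question outside the defined categories; an unrecognized router output
# falls through to default_chain via .get().
response_other = chain.invoke("What's a good way to stay motivated while studying?")
print(f"Default Response: {response_other}")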
With this addition, any input that the router cannot confidently classify (or for which it outputs an unknown category) is handled by the default_chain, making the application more robust. This pattern provides a powerful way to build more sophisticated and intelligent applications by moving past simple linear sequences and incorporating conditional, logic-driven workflows.