While static prompts are a starting point, many real-world applications require prompts that adapt to changing circumstances, user inputs, or retrieved data. Hardcoding every possible prompt variation is impractical and inflexible. This is where leveraging Python's capabilities to generate prompts dynamically becomes essential. Dynamic prompt generation allows you to construct tailored instructions for the LLM on the fly, leading to more relevant, personalized, and effective interactions.
Imagine needing to generate a summary of a news article, but also wanting to ask the LLM to adopt a specific tone (formal, informal, optimistic) based on user preference. Or consider a chatbot that needs to incorporate the user's name and previous conversation turns into its next prompt. These scenarios demand prompts that are assembled just before being sent to the model, incorporating specific, up-to-date information.
Python offers several straightforward ways to build prompts dynamically.
Python's f-strings (formatted string literals) are often the most direct and readable way to embed variables and expressions into your prompt text. They allow you to create templates where placeholders are filled in at runtime.
user_name = "Alex"
topic = "the future of renewable energy"
difficulty = "beginner"
# Using f-strings to insert variables
prompt = f"""
Explain {topic} to me.
Assume I am a {difficulty} and my name is {user_name}.
Keep the explanation concise and focus on the main benefits.
"""
print(prompt)
This code generates the following prompt string:
Explain the future of renewable energy to me.
Assume I am a beginner and my name is Alex.
Keep the explanation concise and focus on the main benefits.
You can embed any valid Python expression within the curly braces {} inside an f-string, making it a powerful tool for simple dynamic adjustments.
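For instance, method calls and conditional expressions work directly inside the braces. A small illustrative snippet (the variable names here are hypothetical):

```python
audience = "developers"
word_limit = 150

# Expressions, not just variable names, can appear inside the braces
prompt = (
    f"Write a summary for {audience.upper()} "
    f"in {'under' if word_limit < 200 else 'about'} {word_limit} words."
)
print(prompt)
# Write a summary for DEVELOPERS in under 150 words.
```

This keeps small presentation decisions (casing, wording) inside the template itself, without extra branching code.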
Often, you'll want to change parts of the prompt based on certain conditions. Standard Python if/elif/else statements are perfect for this. You can construct different prompt fragments and combine them based on application logic.
user_skill_level = "expert" # Could come from user profile
query = "Explain the transformer architecture."
prompt_base = f"Question: {query}\nAnswer:"
prompt_suffix = "" # Initialize empty suffix
if user_skill_level == "beginner":
    prompt_suffix = "\nExplain it simply, avoiding technical jargon."
elif user_skill_level == "intermediate":
    prompt_suffix = "\nAssume some familiarity with basic machine learning concepts."
elif user_skill_level == "expert":
    prompt_suffix = "\nFeel free to include technical details and mathematical notation."
final_prompt = prompt_base + prompt_suffix
print(final_prompt)
Output for user_skill_level = "expert":
Question: Explain the transformer architecture.
Answer:
Feel free to include technical details and mathematical notation.
This approach allows for significant structural changes to the prompt based on runtime conditions.
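When the branches only select between fixed strings, a dictionary lookup is a compact alternative to the if/elif chain. A sketch using the same skill levels (the function name build_prompt is illustrative):

```python
# Map each skill level to its instruction; unknown levels fall back to ""
suffixes = {
    "beginner": "\nExplain it simply, avoiding technical jargon.",
    "intermediate": "\nAssume some familiarity with basic machine learning concepts.",
    "expert": "\nFeel free to include technical details and mathematical notation.",
}

def build_prompt(query: str, skill_level: str) -> str:
    base = f"Question: {query}\nAnswer:"
    return base + suffixes.get(skill_level, "")

print(build_prompt("Explain the transformer architecture.", "expert"))
```

The dictionary keeps all prompt variants in one place, which is easier to review and extend than scattered conditionals; reserve if/elif for cases where the logic is more involved than a simple lookup.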
When you need to include multiple pieces of data, such as a list of items, user reviews, or search results, Python's for loops are indispensable. You can iterate through the data and format it appropriately within the prompt.
product_name = "Quantum Leap Laptop"
reviews = [
    "Amazing speed, boots up in seconds!",
    "Battery life could be better, but overall solid.",
    "A bit pricey, but the performance justifies it.",
    "Screen resolution is fantastic for graphic design work.",
]
# Format reviews into a numbered list for the prompt
formatted_reviews = "\n".join([f"{i+1}. {review}" for i, review in enumerate(reviews)])
prompt = f"""
Summarize the key pros and cons for the product "{product_name}" based on these user reviews:
{formatted_reviews}
Provide the summary as bullet points under 'Pros' and 'Cons'.
"""
print(prompt)
This generates:
Summarize the key pros and cons for the product "Quantum Leap Laptop" based on these user reviews:
1. Amazing speed, boots up in seconds!
2. Battery life could be better, but overall solid.
3. A bit pricey, but the performance justifies it.
4. Screen resolution is fantastic for graphic design work.
Provide the summary as bullet points under 'Pros' and 'Cons'.
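Real data sources can return far more items than fit comfortably in a context window. A common safeguard is to cap the list before formatting; this sketch (the function name and cap are illustrative choices, not from the example above) shows one way:

```python
def format_reviews(reviews, max_reviews=20):
    """Number the reviews, keeping at most max_reviews to bound prompt size."""
    kept = reviews[:max_reviews]
    lines = [f"{i+1}. {review}" for i, review in enumerate(kept)]
    omitted = len(reviews) - len(kept)
    if omitted:
        lines.append(f"... ({omitted} more review(s) omitted)")
    return "\n".join(lines)

print(format_reviews(["Great!", "Too slow.", "Solid value."], max_reviews=2))
```

Noting how many items were dropped, rather than truncating silently, lets the model acknowledge that its summary is based on a sample.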
Libraries like LangChain are designed with dynamic prompts in mind. Their PromptTemplate objects explicitly define input variables, making dynamic generation clean and structured. You learned about PromptTemplate in Chapter 4; here's how you integrate dynamic data:
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI # Example LLM integration
# Assume llm is initialized: llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
template_string = """
Generate a short product description for a product with the following features.
Product Name: {product_name}
Category: {category}
Key Features:
{features_list}
Target Audience: {audience}
Tone: {tone}
"""
prompt_template = PromptTemplate(
    input_variables=["product_name", "category", "features_list", "audience", "tone"],
    template=template_string,
)
# Dynamic data
product_data = {
    "product_name": "EcoGrow Smart Garden",
    "category": "Home & Garden",
    "features": ["Automated watering", "LED grow lights", "App connectivity", "Soil sensors"],
    "audience": "Urban dwellers with limited space",
    "tone": "Enthusiastic and eco-conscious",
}
# Format the features list using a loop before passing to the template
formatted_features = "\n".join([f"- {feature}" for feature in product_data["features"]])
# Generate the final prompt using the template and dynamic data
final_prompt_string = prompt_template.format(
    product_name=product_data["product_name"],
    category=product_data["category"],
    features_list=formatted_features,  # Pass the pre-formatted list
    audience=product_data["audience"],
    tone=product_data["tone"],
)
print("--- Generated Prompt ---")
print(final_prompt_string)
# Example of using this prompt in a chain (conceptual; recent LangChain
# versions favor the pipe syntax below over the legacy LLMChain class)
# chain = prompt_template | llm
# response = chain.invoke({"product_name": product_data["product_name"], ...})
# print("\n--- LLM Response (Example) ---")
# print(response.content)  # This part requires API keys and actual execution
This example demonstrates combining Python's string formatting (within the loop for features_list) with LangChain's PromptTemplate for structured dynamic prompt creation. The PromptTemplate clearly defines the expected inputs, and Python code prepares the data before filling the template.
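If LangChain isn't available, a few lines of standard-library Python can reproduce the core benefit (explicit variables plus a fail-fast check for missing ones). This is a minimal sketch under that assumption, not LangChain's actual implementation; render_template is a hypothetical helper:

```python
import string

def render_template(template: str, **values) -> str:
    """Fill a {placeholder}-style template, failing loudly on missing variables."""
    # Discover the placeholders the template actually uses
    required = {name for _, name, _, _ in string.Formatter().parse(template) if name}
    missing = required - values.keys()
    if missing:
        raise ValueError(f"Missing template variables: {sorted(missing)}")
    return template.format(**values)

prompt = render_template("Summarize {doc} in a {tone} tone.", doc="the report", tone="formal")
print(prompt)
# Summarize the report in a formal tone.
```

Failing early on a missing variable, instead of silently sending a prompt with an unfilled placeholder, catches template bugs before they reach the model.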
Diagram illustrating the flow where application logic uses Python to combine data and a template into a final prompt for the LLM.
By mastering dynamic prompt generation with Python, you gain significant control over LLM interactions, enabling you to build more sophisticated, adaptive, and user-centric applications. It transforms the prompt from a static instruction into a flexible communication tool shaped by real-time context and data.
© 2025 ApX Machine Learning