While a well-structured set of instructions can guide a model effectively, some tasks require more than just instructions. They benefit from demonstration. For instance, when you need the model to follow a very specific output format or perform a classification task with unique categories, providing examples directly within the prompt can dramatically improve performance. This technique is known as few-shot prompting.
Few-shot prompting operates on the principle of in-context learning. By including a handful of input-output examples, or "shots," you are conditioning the model to recognize a pattern. The model then applies this learned pattern to the new input you provide, leading to more accurate and consistent results.
The most direct way to implement few-shot prompting is to embed the examples directly into your prompt string. This approach works well when you have a small, fixed set of examples that are universally applicable to your task.
Let's consider a sentiment analysis task where we want to classify text into one of three categories: Positive, Negative, or Neutral.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
# Assume llm is initialized, e.g., llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt_with_examples = PromptTemplate(
    input_variables=["input"],
    template="""
Classify the sentiment of the following text.

Example 1:
Text: "I'm so excited for the new product launch! It's going to be amazing."
Sentiment: Positive

Example 2:
Text: "The delivery was delayed and the item arrived damaged."
Sentiment: Negative

Example 3:
Text: "The system is functioning as expected."
Sentiment: Neutral

Now, classify this text:
Text: "{input}"
Sentiment:
"""
)
# Create the chain
chain = prompt_with_examples | llm
# Run the chain with a new input
response = chain.invoke({"input": "I'm not sure how I feel about the new update."})
print(response.content)
Neutral
In this example, the model learns the expected Text -> Sentiment format and the characteristics of each category from the three shots provided. This hardcoded method is simple and effective for stable requirements but lacks flexibility. If you have many examples or need to select different ones for different inputs, this approach becomes impractical.
LangChain provides a more scalable and dynamic solution with the FewShotPromptTemplate. This class constructs a prompt from a set of examples, formatting them according to a specified template. This separation of logic allows you to manage your examples independently from the main prompt structure.
The FewShotPromptTemplate requires a few main components:
- examples: A list of dictionaries containing your example data.
- example_prompt: A PromptTemplate that defines how each example in the examples list should be formatted.
- prefix: Text that appears before the formatted examples.
- suffix: Text that appears after the formatted examples, which typically includes the final input variable.

Let's rebuild our sentiment classifier using this more structured approach.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate, FewShotPromptTemplate
# 1. Define the list of examples
examples = [
    {"text": "The new feature is incredibly intuitive and has improved my workflow.", "sentiment": "Positive"},
    {"text": "I've been on hold for over an hour. This is unacceptable.", "sentiment": "Negative"},
    {"text": "The package arrived on the scheduled day.", "sentiment": "Neutral"},
]
# 2. Create a template to format each example
# (The sentiment value is left unquoted so the examples match the bare
# "Sentiment:" cue in the suffix below.)
example_template = """
Text: "{text}"
Sentiment: {sentiment}
"""

example_prompt = PromptTemplate(
    input_variables=["text", "sentiment"],
    template=example_template
)
# 3. Assemble the FewShotPromptTemplate
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Classify the sentiment of the following text based on the examples.",
    suffix='Text: "{input}"\nSentiment:',
    input_variables=["input"],
    example_separator="\n\n"
)
# Assume llm is initialized
# llm = ChatOpenAI(model="gpt-4o", temperature=0)
chain = few_shot_prompt | llm
# Run the chain
response = chain.invoke({"input": "The documentation is clear, but I found a small typo."})
print(response.content)
Neutral
This method is cleaner and more maintainable. Your examples are stored as a structured list of dictionaries, which can be easily loaded from a file or database, and the logic for formatting them is separate from the main prompt instructions.
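Because the examples are plain dictionaries, persisting them outside your code is straightforward. A minimal sketch of a round trip through a JSON file (the file path and example contents here are illustrative, not from the original code):

```python
import json
import os
import tempfile

# Illustrative example set, in the same shape FewShotPromptTemplate expects.
examples = [
    {"text": "The new feature is incredibly intuitive.", "sentiment": "Positive"},
    {"text": "The package arrived on the scheduled day.", "sentiment": "Neutral"},
]

# Write the examples to a JSON file, as you might when curating them separately.
path = os.path.join(tempfile.gettempdir(), "examples.json")
with open(path, "w") as f:
    json.dump(examples, f)

# Load them back at startup; the result can be passed straight to
# FewShotPromptTemplate(examples=loaded, ...).
with open(path) as f:
    loaded = json.load(f)

print(loaded[0]["sentiment"])  # → Positive
```

The same pattern applies to examples fetched from a database row or an API response: as long as each record becomes a dictionary whose keys match the example prompt's input variables, the template does not care where the data came from.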
What if you have hundreds or thousands of examples? Including them all would exceed the LLM's context window limit and be inefficient. A better strategy is to select only the most relevant examples for a given input. LangChain's ExampleSelector objects are designed for this purpose.
One of the most effective selectors is the SemanticSimilarityExampleSelector. It finds examples that are semantically closest to the user's input. This is done by embedding all examples and the input into a vector space and then performing a similarity search.
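The mechanism can be illustrated without any embedding API: represent each example as a vector, embed the input the same way, and keep the k examples with the highest cosine similarity. A toy sketch with hand-assigned 2-D vectors (the vectors and the select_examples helper are invented for illustration, standing in for a real embedding model and vector store):

```python
import math

# Toy "embeddings": hand-assigned 2-D vectors standing in for real embeddings.
example_vectors = {
    "I was charged twice for my subscription.": (0.9, 0.1),  # billing-like
    "The app crashed after the update.":        (0.1, 0.9),  # technical-like
    "Can I get a refund?":                      (0.8, 0.2),  # billing-like
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def select_examples(input_vec, k=2):
    """Return the k example texts most similar to the input vector."""
    ranked = sorted(
        example_vectors,
        key=lambda text: cosine(example_vectors[text], input_vec),
        reverse=True,
    )
    return ranked[:k]

# An input vector near the "billing" region selects the two billing examples.
selected = select_examples((0.85, 0.15), k=2)
print(selected)
```

A real selector works the same way, except the vectors come from an embedding model and the nearest-neighbor search is delegated to a vector store.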
To use SemanticSimilarityExampleSelector, you need three things: a list of examples, an embedding model to convert them into vectors, and a vector store to index and search those vectors.
The following diagram illustrates how this selector works to dynamically construct the prompt.
Dynamic example selection process using semantic similarity.
Let's implement this for a slightly more complex task: classifying user support tickets. By selecting examples of similar past tickets, we can help the model classify the new one more accurately.
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
# Example support tickets
examples = [
    {"ticket": "My account is locked and I can't log in.", "category": "Account Access"},
    {"ticket": "How do I reset my password?", "category": "Account Access"},
    {"ticket": "The app crashed after the latest update on my phone.", "category": "Technical Issue"},
    {"ticket": "I am getting a 'connection error' message.", "category": "Technical Issue"},
    {"ticket": "I was charged twice for my subscription this month.", "category": "Billing"},
    {"ticket": "Can I get a refund for my last purchase?", "category": "Billing"},
    {"ticket": "What are your business hours?", "category": "General Inquiry"},
]

# The template for formatting each example
example_prompt = PromptTemplate(
    input_variables=["ticket", "category"],
    template="Ticket: {ticket}\nCategory: {category}"
)

# Initialize the example selector
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),  # The embedding model
    Chroma,              # The vector store class
    k=2                  # Number of examples to select
)

# Create the FewShotPromptTemplate using the selector
similar_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Classify the support ticket based on similar past tickets.",
    suffix="Ticket: {input}\nCategory:",
    input_variables=["input"]
)
# Test the selector
new_ticket = "I can't find the invoice for my last payment."
print(similar_prompt.format(input=new_ticket))
When you run this code, the example_selector will find that the new ticket about an "invoice" is most similar to the examples related to "Billing". The output prompt will therefore only include those two examples, making it highly context-aware:
Classify the support ticket based on similar past tickets.
Ticket: I was charged twice for my subscription this month.
Category: Billing
Ticket: Can I get a refund for my last purchase?
Category: Billing
Ticket: I can't find the invoice for my last payment.
Category:
This dynamic selection makes your application more efficient. It provides the model with the most relevant context for each specific input, which is a significant step up from static, hardcoded examples. By mastering few-shot prompting, you can better control model behavior and prepare it to produce structured outputs, a topic we will address next with Output Parsers.