Large Language Models (LLMs) possess impressive capabilities, but they often perform better when shown how to complete a task rather than just being told what to do. This is where few-shot prompting comes into play. Unlike zero-shot prompting, which provides only the instruction, few-shot prompting includes several examples (shots) of the task being performed directly within the prompt itself.
Think of it as giving the model a mini-tutorial right before asking it to solve a new problem. These examples act as demonstrations, guiding the model towards the desired output format, style, or reasoning process. This technique is particularly useful for tasks where the desired output structure is specific or when the task itself requires a pattern that isn't immediately obvious from the instruction alone.
Let's clarify the distinction:

Zero-shot (instruction only):

Translate to French: Hello world

One-shot (a single example):

Translate to French:
sea otter => loutre de mer
Hello world =>

Few-shot (multiple examples):

Translate to French:
sea otter => loutre de mer
cheese => fromage
blue sky => ciel bleu
Hello world =>
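The difference is easy to see in code. Here is a minimal sketch, in plain Python with no LLM library required, that builds both the zero-shot and few-shot variants of the translation prompt above:

```python
# Build a zero-shot and a few-shot version of the same translation task.
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
    ("blue sky", "ciel bleu"),
]
new_input = "Hello world"

# Zero-shot: instruction only
zero_shot = f"Translate to French: {new_input}"

# Few-shot: instruction plus demonstrations, ending where the model continues
lines = ["Translate to French:"]
for source, target in examples:
    lines.append(f"{source} => {target}")
lines.append(f"{new_input} =>")
few_shot = "\n".join(lines)

print(zero_shot)
print(few_shot)
```

Both strings would be sent to a model in exactly the same way; only the few-shot version gives the model a pattern to continue.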
Providing multiple examples offers several advantages:

- It demonstrates the exact output format you expect, reducing ambiguity.
- It establishes a pattern the model can generalize to new inputs.
- It conveys tone, style, or labeling conventions that are hard to describe in an instruction alone.
- It often improves accuracy on tasks where the instruction by itself is under-specified.
In Python, you can construct few-shot prompts using simple string formatting or more structured approaches with libraries like LangChain. You can use f-strings or str.format() to build the prompt dynamically.
# Examples of input/output pairs
examples = [
    {"input": "A friendly water mammal.", "output": "sea otter"},
    {"input": "A dairy product made from milk.", "output": "cheese"},
    {"input": "The color of the atmosphere on a clear day.", "output": "blue sky"},
]

# The new input we want the model to process
new_input = "A large, gray animal with a trunk."

# Construct the prompt string
prompt_parts = ["Identify the object from the description.\n"]
for example in examples:
    prompt_parts.append(f"Description: {example['input']}")
    prompt_parts.append(f"Object: {example['output']}\n")  # Trailing newline separates examples

# Add the final input
prompt_parts.append(f"Description: {new_input}")
prompt_parts.append("Object:")  # Prompt the model for the final output

final_prompt = "\n".join(prompt_parts)
print(final_prompt)
# Expected Output:
# Identify the object from the description.
#
# Description: A friendly water mammal.
# Object: sea otter
#
# Description: A dairy product made from milk.
# Object: cheese
#
# Description: The color of the atmosphere on a clear day.
# Object: blue sky
#
# Description: A large, gray animal with a trunk.
# Object:
This final_prompt string would then be sent to the LLM API. The model sees the pattern in the examples and is likely to output "elephant".
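Sending the prompt is a separate step from building it. A sketch of that step, assuming the official openai Python client (v1.0+) and an illustrative model name — adapt both to whichever provider you use:

```python
# Sketch of sending a few-shot prompt to a chat-style LLM API.
# Assumes the official `openai` client (>=1.0); the model name is illustrative.
final_prompt = (
    "Identify the object from the description.\n\n"
    "Description: A friendly water mammal.\nObject: sea otter\n\n"
    "Description: A dairy product made from milk.\nObject: cheese\n\n"
    "Description: A large, gray animal with a trunk.\nObject:"
)

# The whole few-shot prompt travels as a single user message.
messages = [{"role": "user", "content": final_prompt}]

def send(messages):
    """Call the API; wrapped in a function so the sketch runs without an API key."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

# send(messages)  # would likely return "elephant"
```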
Frameworks like LangChain provide more robust ways to manage prompts, especially few-shot prompts. You can use classes like FewShotPromptTemplate. (We explored PromptTemplate in Chapter 4; FewShotPromptTemplate builds upon that.)
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Define the examples
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]

# Define the template for how each example should be formatted
example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")

# Define the overall few-shot prompt template
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of the input word.",
    suffix="Input: {user_input}\nOutput:",  # {user_input} is the variable for the final input
    input_variables=["user_input"],  # Specifies the variable name for the final input
    example_separator="\n\n",  # Separator between examples
)

# Format the prompt for a new input
final_prompt = few_shot_prompt.format(user_input="hot")
print(final_prompt)
# Expected Output:
# Give the antonym of the input word.
#
# Input: happy
# Output: sad
#
# Input: tall
# Output: short
#
# Input: hot
# Output:
Using FewShotPromptTemplate makes managing examples, formatting, and the overall prompt structure cleaner, especially as prompts become more complex.
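Conceptually, the template's job amounts to joining a prefix, the formatted examples, and a suffix. The following is a simplified, dependency-free sketch of that idea — illustrative only, not LangChain's actual implementation:

```python
# A simplified, dependency-free sketch of what a few-shot template does.
# This is illustrative only, not LangChain's actual implementation.
class SimpleFewShotTemplate:
    def __init__(self, examples, example_template, prefix, suffix, separator="\n\n"):
        self.examples = examples
        self.example_template = example_template  # e.g. "Input: {input}\nOutput: {output}"
        self.prefix = prefix
        self.suffix = suffix
        self.separator = separator

    def format(self, **kwargs):
        # Prefix, then each formatted example, then the suffix with the new input.
        parts = [self.prefix]
        for ex in self.examples:
            parts.append(self.example_template.format(**ex))
        parts.append(self.suffix.format(**kwargs))
        return self.separator.join(parts)

template = SimpleFewShotTemplate(
    examples=[{"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}],
    example_template="Input: {input}\nOutput: {output}",
    prefix="Give the antonym of the input word.",
    suffix="Input: {user_input}\nOutput:",
)
print(template.format(user_input="hot"))
```

This reproduces the same antonym prompt shown above; the real class adds features like example selectors and validation on top of this core behavior.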
The quality of your examples is significant. Poor examples can confuse the model or lead it to replicate errors. Keep these points in mind:

- Use a consistent format across all examples; the model will copy whatever pattern it sees.
- Verify that every example's output is actually correct, since errors get imitated.
- Cover the variety of inputs you expect, including edge cases, rather than near-duplicates.
- Order can matter; experiment with example ordering if results are inconsistent.
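Basic consistency checks like these are easy to automate. Here is a small, illustrative helper — the specific rules are assumptions, so adapt them to your task — that catches common slips before examples reach a prompt:

```python
def check_examples(examples, required_keys=("input", "output")):
    """Return a list of problems found in a set of few-shot examples.
    Illustrative checks only: consistent keys, non-empty values, no duplicate inputs."""
    problems = []
    seen_inputs = set()
    for i, ex in enumerate(examples):
        missing = [k for k in required_keys if k not in ex]
        if missing:
            problems.append(f"example {i}: missing keys {missing}")
            continue
        for k in required_keys:
            if not str(ex[k]).strip():
                problems.append(f"example {i}: empty value for '{k}'")
        if ex["input"] in seen_inputs:
            problems.append(f"example {i}: duplicate input '{ex['input']}'")
        seen_inputs.add(ex["input"])
    return problems

good = [{"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}]
bad = [{"input": "happy", "output": ""}, {"input": "happy", "output": "sad"}]
print(check_examples(good))  # []
print(check_examples(bad))   # flags the empty output and the duplicate input
```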
Few-shot prompting is particularly effective in scenarios such as:

- Classification with custom labels the model has not seen defined elsewhere.
- Extracting structured output (such as JSON fields) from free-form text.
- Matching a specific tone, style, or domain vocabulary.
- Tasks where the instruction alone is ambiguous but the pattern is easy to demonstrate.
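The structured-output case can be sketched in the same style as the earlier examples. The field names and texts below are invented purely for illustration:

```python
import json

# Few-shot prompt for extracting JSON from free-form text.
# The schema ({"name": ..., "city": ...}) is invented for this illustration.
examples = [
    {"text": "Alice moved to Paris last year.",
     "json": {"name": "Alice", "city": "Paris"}},
    {"text": "Bob has lived in Toronto since 2010.",
     "json": {"name": "Bob", "city": "Toronto"}},
]
new_text = "Carla recently settled in Lisbon."

parts = ["Extract the person's name and city as JSON.\n"]
for ex in examples:
    parts.append(f"Text: {ex['text']}")
    parts.append(f"JSON: {json.dumps(ex['json'])}\n")
parts.append(f"Text: {new_text}")
parts.append("JSON:")
prompt = "\n".join(parts)
print(prompt)
```

Because every demonstration ends in valid JSON, the model is strongly nudged to continue with valid JSON for the new text as well.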
While powerful, few-shot prompting has aspects to consider:

- Examples consume context-window tokens, which adds cost and leaves less room for other content.
- Results can be sensitive to the choice, order, and formatting of examples.
- Examples do not guarantee correct generalization; the model may still deviate on inputs unlike the demonstrations.
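The context-window cost is straightforward to manage programmatically. A rough sketch that keeps only as many examples as fit a character budget — a real implementation would count tokens with the model's tokenizer, and the budget here is arbitrary:

```python
def fit_examples(examples, format_example, budget_chars=300):
    """Keep examples (in order) until the formatted text would exceed the budget.
    Character counts are a crude stand-in for real token counting."""
    kept, used = [], 0
    for ex in examples:
        rendered = format_example(ex)
        if used + len(rendered) > budget_chars:
            break
        kept.append(ex)
        used += len(rendered)
    return kept

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "fast", "output": "slow"},
]
fmt = lambda ex: f"Input: {ex['input']}\nOutput: {ex['output']}\n\n"
print(len(fit_examples(examples, fmt, budget_chars=60)))  # 2: the third example no longer fits
```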
Few-shot prompting is a fundamental technique in practical prompt engineering. By providing concrete examples within the prompt, you significantly enhance your ability to guide LLM behavior and achieve more reliable, accurate, and correctly formatted results, especially when implementing these prompts programmatically using Python.
© 2025 ApX Machine Learning