While clear instructions are essential for communicating with Large Language Models, sometimes just telling the model what to do isn't enough. Imagine trying to teach someone a new game just by describing the rules versus showing them a few rounds. Often, seeing examples makes the task much clearer. This is where few-shot prompting comes in.
Instead of relying solely on instructions (which is sometimes called zero-shot prompting), few-shot prompting involves including examples of the task you want the LLM to perform directly within your prompt. You show the model the pattern, the format, or the type of response you expect. Providing just one example is called one-shot prompting, while providing more than one (typically 2 to 5) is referred to as few-shot prompting.
Providing examples in your prompt offers several advantages: it gives the model a concrete template for the expected output format, it resolves ambiguity in how you define the task, and it often improves accuracy on nuanced inputs without any retraining.

A typical few-shot prompt includes: an instruction describing the task, a handful of example input-output pairs formatted consistently, and the final input you want the model to complete.
It's important to format your examples consistently so the model can easily recognize the pattern. Common formats include using labels like Input: and Output:, or Q: and A:, or simply demonstrating the transformation.
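As a rough sketch of this pattern, a labeled few-shot prompt can be assembled programmatically. The helper name and default labels below are illustrative assumptions, not part of any particular library:

```python
# Illustrative sketch: assemble a few-shot prompt with consistent labels.
# The function name and default labels are assumptions, not a standard API.

def build_few_shot_prompt(instruction, examples, query,
                          input_label="Input:", output_label="Output:"):
    """Join an instruction, labeled example pairs, and a final query."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"{input_label} {example_input}")
        lines.append(f"{output_label} {example_output}")
        lines.append("")  # blank line between examples
    lines.append(f"{input_label} {query}")
    lines.append(output_label)  # left open for the model to complete
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English words to French.",
    [("cat", "chat"), ("dog", "chien")],
    "bird",
)
```

Ending the prompt with a bare output label nudges the model to continue the established pattern rather than add commentary.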
Let's look at a simple example: classifying the sentiment of movie reviews.
Zero-Shot Prompt (Instruction Only):
Classify the sentiment of the following movie review as Positive, Negative, or Neutral.
Review: This movie was absolutely fantastic, a must-see!
Sentiment:
The model might correctly identify this as Positive. But for more nuanced reviews, or if you have specific ideas about what counts as Neutral, examples help.
Few-Shot Prompt (Instruction and Examples):
Classify the sentiment of the following movie reviews as Positive, Negative, or Neutral.
Review: I loved the acting and the storyline was gripping.
Sentiment: Positive
Review: The plot was predictable and the pacing felt slow.
Sentiment: Negative
Review: It was an okay movie, neither great nor terrible.
Sentiment: Neutral
Review: This movie was absolutely fantastic, a must-see!
Sentiment:
By providing examples, you've given the model a clearer template for the task. It sees the input format (Review:), the desired output format (Sentiment:), and examples of how inputs map to outputs according to your definition.
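Many chat-style APIs also let you supply these examples as alternating conversation turns instead of a single text block. The exact message schema varies by provider, so treat the dictionary format below as an assumption for illustration:

```python
# Sketch: encoding few-shot sentiment examples as chat turns.
# The role/content dictionary shape mirrors common chat APIs but is assumed here.

def sentiment_messages(review):
    examples = [
        ("I loved the acting and the storyline was gripping.", "Positive"),
        ("The plot was predictable and the pacing felt slow.", "Negative"),
        ("It was an okay movie, neither great nor terrible.", "Neutral"),
    ]
    messages = [{
        "role": "system",
        "content": ("Classify the sentiment of movie reviews as "
                    "Positive, Negative, or Neutral."),
    }]
    for text, label in examples:
        messages.append({"role": "user", "content": f"Review: {text}"})
        messages.append({"role": "assistant", "content": f"Sentiment: {label}"})
    # The final, unanswered query goes last.
    messages.append({"role": "user", "content": f"Review: {review}"})
    return messages

msgs = sentiment_messages("This movie was absolutely fantastic, a must-see!")
```

Presenting examples as prior turns lets the model treat them as demonstrated behavior rather than quoted text, which some chat models follow more faithfully.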
Here's another example: extracting specific information and formatting it as JSON.
Task: Extract the fruit and color from a sentence.
Few-Shot Prompt:
Extract the fruit and its color from the sentence and provide the output in JSON format.
Sentence: I bought a bright red apple.
JSON: {"fruit": "apple", "color": "red"}
Sentence: The recipe calls for a green lime.
JSON: {"fruit": "lime", "color": "green"}
Sentence: He ate a yellow banana for breakfast.
JSON:
The examples clearly demonstrate both the extraction task and the exact JSON structure required ({"fruit": "...", "color": "..."}). This makes it much more likely the model will produce the correct output for the final sentence compared to just asking it to "Extract the fruit and color as JSON."
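Once the model replies, the JSON can be validated in code before you rely on it. In the sketch below, raw_reply is a stand-in for text returned by an LLM call, not real model output:

```python
import json

# Sketch: validate the model's JSON reply from the extraction prompt.
# `raw_reply` is a placeholder for text an LLM call would return.

def parse_fruit_json(raw_reply):
    """Parse the reply and check it contains the expected keys."""
    data = json.loads(raw_reply)  # raises ValueError if not valid JSON
    missing = {"fruit", "color"} - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

raw_reply = '{"fruit": "banana", "color": "yellow"}'
result = parse_fruit_json(raw_reply)
```

Even with good examples, models occasionally deviate from the requested format, so parsing with an explicit check (and a retry or fallback on failure) is a common safeguard.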
Two practical tips. First, keep your labels consistent: if you use Input: and Output: for the examples, use Input: for your final query too. Second, start small; often one example (one-shot) or a handful (few-shot) are sufficient. Adding too many examples can sometimes confuse the model or use up too much of its allowed input length (context window). Experiment to find what works best.

Few-shot prompting is a fundamental technique for improving the reliability and specificity of LLM responses. By showing the model what you want, in addition to telling it, you provide valuable context that helps bridge the gap between your intention and the model's output. As you practice, you'll develop an intuition for when and how to use examples effectively.
© 2025 ApX Machine Learning