Prompt templates manage the structure of an LLM's input. Few-shot prompting, on the other hand, provides control over its behavior. Instead of just telling the model what to do, you show it. This technique, also known as in-context learning, significantly improves a model's performance on specific tasks by providing examples of the desired input-output pattern directly within the prompt.
Few-shot prompting is particularly effective for guiding the model's output format, tone, and reasoning process without the need for expensive fine-tuning.
At its core, a few-shot example is a pair of an input and its corresponding desired output. These examples serve as a guide for the model when it processes a new, unseen input. In the toolkit, you can structure these pairs using the create_example function.
Let's consider a sentiment classification task. An example would consist of a sample text and its correct sentiment label.
from kerb.prompt import create_example
# Create examples for a sentiment classification task
ex_positive = create_example(
    input_text="The product exceeded my expectations!",
    output_text="positive",
)

ex_negative = create_example(
    input_text="Terrible customer service, very disappointed.",
    output_text="negative",
)
Each object created by create_example encapsulates one complete demonstration for the model.
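You can inspect an example's stored fields directly through its input and output attributes (the same attributes used by the selection snippets later in this section):
# Each example keeps its demonstration pair in the input and output attributes
print(ex_positive.input)   # The product exceeded my expectations!
print(ex_positive.output)  # positive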
For any given task, you will likely have dozens or even hundreds of high-quality examples. Since you can only fit a few into a single prompt, you need a way to manage them and select the most relevant ones for a given query. The ExampleSelector is designed for this purpose. It acts as a repository, or "bank," for all your potential few-shot examples.
You can populate an ExampleSelector with the examples you've created.
from kerb.prompt import ExampleSelector, create_example
# Create a selector to manage our examples
sentiment_selector = ExampleSelector()
# Add examples to the selector
examples_data = [
("The movie was fantastic!", "positive"),
("Worst purchase ever made.", "negative"),
("Average quality, decent price.", "neutral"),
("Absolutely love this product!", "positive"),
("Not worth the money.", "negative"),
]
for inp, out in examples_data:
sentiment_selector.add(create_example(input_text=inp, output_text=out))
With your examples organized, you can now employ different strategies to select the most effective subset for your prompt.
Not all examples are equally useful for every query. The select method on the ExampleSelector allows you to choose a selection strategy that best fits your needs.
Randomly selecting examples is a sensible default strategy. It prevents every prompt from leaning on the same fixed examples in the same order, which can bias the model toward one narrow pattern, and the added variety helps the model generalize better.
# Select 3 random examples from the bank
selected_examples = sentiment_selector.select(k=3, strategy="random")
for ex in selected_examples:
print(f"'{ex.input}' -> '{ex.output}'")
Sometimes, you want to ensure the selected examples cover a range of patterns. The diverse strategy selects examples that are different from each other, maximizing the variety of information presented to the model. This is useful for showing the model how to handle different types of inputs.
# Select 4 examples that are as different as possible
diverse_examples = sentiment_selector.select(k=4, strategy="diverse")
How Diversity is Calculated
The diversity-based selection uses a simple embedding-based algorithm to find examples that are distant from each other in vector space. This ensures the selected examples are semantically distinct, providing a broader context for the LLM.
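One common way to do this is greedy farthest-point selection: start from one example, then repeatedly add the example whose embedding is farthest from everything already chosen. The sketch below is purely illustrative; select_diverse_sketch and embed are hypothetical names introduced here, and this is not necessarily the exact algorithm the toolkit implements.
import numpy as np

def select_diverse_sketch(examples, k, embed):
    """Greedy farthest-point selection over example embeddings (illustrative only)."""
    # embed: any function mapping text to a 1-D vector; a stand-in for a real embedding model
    vectors = np.array([embed(ex.input) for ex in examples], dtype=float)
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # normalize so dot products are cosine similarities
    selected = [0]  # start from an arbitrary example
    while len(selected) < min(k, len(examples)):
        # For every candidate, similarity to its closest already-selected example
        nearest_sim = (vectors @ vectors[selected].T).max(axis=1)
        nearest_sim[selected] = np.inf  # never re-pick an example we already have
        selected.append(int(np.argmin(nearest_sim)))  # add the most distant candidate
    return [examples[i] for i in selected]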
The most sophisticated strategy is semantic selection. It selects examples that are most similar in meaning to the current user query. This is an extremely powerful technique for creating dynamic, context-aware prompts. For instance, if a user asks a question about Python lists, the selector will find and include few-shot examples that also deal with list manipulation.
This strategy requires an embedding model to calculate semantic similarity. If the necessary modules are not available, it gracefully falls back to a different strategy such as random. The snippet below shows the pattern, with an explicit fallback as well; the code_selector bank is a small, hypothetical one built here purely for illustration.
# Build a small bank of code-related examples for this illustration
code_selector = ExampleSelector()
for inp, out in [
    ("How do I loop over a list?", "Use a for loop: for item in items: ..."),
    ("How do I reverse a list?", "Use items[::-1] or items.reverse()."),
    ("How do I open a file?", "Use a context manager: with open(path) as f: ..."),
]:
    code_selector.add(create_example(input_text=inp, output_text=out))

query = "How do I iterate over a list?"

try:
    # Pick the examples closest in meaning to the query
    semantic_examples = code_selector.select(k=2, strategy="semantic", query=query)
except Exception:
    # Fall back to random selection if no embedding model is available
    semantic_examples = code_selector.select(k=2, strategy="random")
Once you have selected your examples, you need to format them into a single string that can be inserted into your prompt template. The format_examples function handles this.
You can control the appearance of each example with the template argument and the spacing between them with the separator.
from kerb.prompt import format_examples
# We'll use the randomly selected examples from before
selected_examples = sentiment_selector.select(k=3, strategy="random")
# Format the examples into a string
formatted_text = format_examples(
    selected_examples,
    template="Input: {input}\nOutput: {output}",
    separator="\n\n",
)
print(formatted_text)
This will produce a clean, well-structured block of text ready for the LLM. Because the selection is random, the exact examples vary from run to run, but the result looks like this:
Input: The movie was fantastic!
Output: positive

Input: Not worth the money.
Output: negative

Input: Average quality, decent price.
Output: neutral
The final step is to combine your system instruction, the formatted few-shot examples, and the user's new query into a single, complete prompt.
Here is a full workflow for a name extraction task:
from kerb.prompt import (
    ExampleSelector,
    create_example,
    format_examples,
    render_template,
)
# 1. Create an example bank
name_extractor_bank = ExampleSelector()
training_data = [
("Extract the name: John Smith lives in NYC", "John Smith"),
("The CEO, Michael Brown, announced the merger.", "Michael Brown"),
("Dr. Sarah Johnson is a leading scientist.", "Dr. Sarah Johnson"),
("We spoke with Professor David Lee.", "Professor David Lee"),
]
for inp, out in training_data:
name_extractor_bank.add(create_example(input_text=inp, output_text=out))
# 2. Define the main prompt template
prompt_template = """You are a name extraction system.
Here are some examples of how to extract names:
{examples}
Now, perform the extraction for the following text.
Input: {user_query}
Output:"""
# 3. Handle a new user query
new_query = "Jennifer Martinez is the new director of the project."
# 4. Select and format few-shot examples
selected = name_extractor_bank.select(k=2, strategy="random")
examples_text = format_examples(
    selected,
    template="Input: {input}\nOutput: {output}",
    separator="\n\n",
)
# 5. Render the final prompt
final_prompt = render_template(
    prompt_template,
    {
        "examples": examples_text,
        "user_query": new_query,
    },
)
print(final_prompt)
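Because the examples are drawn at random, the exact demonstrations vary between runs, but the printed prompt will look roughly like this:
You are a name extraction system.
Here are some examples of how to extract names:
Input: The CEO, Michael Brown, announced the merger.
Output: Michael Brown

Input: We spoke with Professor David Lee.
Output: Professor David Lee
Now, perform the extraction for the following text.
Input: Jennifer Martinez is the new director of the project.
Output: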
This structured approach not only improves model accuracy but also makes your prompting logic more modular and maintainable. By separating the examples from the prompt templates, you can update, manage, and test your few-shot data independently of your core prompt logic.
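For example, you might keep the examples in a small JSON file and rebuild the selector at startup, so that adding or correcting a demonstration never touches your prompt templates. The file layout and the load_example_bank helper below are one possible convention, not something the toolkit prescribes.
import json

from kerb.prompt import ExampleSelector, create_example

def load_example_bank(path):
    # Expects a JSON list of {"input": ..., "output": ...} records (a convention chosen here)
    selector = ExampleSelector()
    with open(path) as f:
        for record in json.load(f):
            selector.add(create_example(input_text=record["input"], output_text=record["output"]))
    return selector
Updating the few-shot data then becomes a data change rather than a prompt change.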