Having explored several advanced prompting strategies theoretically, let's put them into practice. This section provides hands-on exercises where you'll apply few-shot prompting, role prompting, structured output requests, and chain-of-thought to guide Large Language Model (LLM) behavior more effectively.
For these exercises, you'll need access to an LLM, either through a web interface (like a playground) or programmatically via an API. The focus here is on crafting the prompt itself; the specific API call details are less important for this exercise but were covered conceptually in Chapter 1 and will be detailed further in Chapter 4.
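If you prefer to follow along programmatically, the snippets later in this section assume a small helper like the one sketched below. The helper name call_llm, the OpenAI Python client, and the model name gpt-4o-mini are illustrative choices only, not requirements of the exercises; substitute whichever client and model you actually use.

# Minimal helper assumed by the later snippets. This sketch uses the
# OpenAI Python client as one example; any chat-completion API works.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep output near-deterministic for classification tasks
    )
    return response.choices[0].message.content.strip()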
Zero-shot prompting relies on the LLM's general knowledge. While powerful, it can sometimes be ambiguous for specific classification tasks. Few-shot prompting provides examples within the prompt to clarify your intent.
Task: Classify customer feedback into Positive, Negative, or Neutral.
1. Zero-Shot Attempt:
Consider this simple prompt:
Classify the sentiment of the following customer feedback:
Feedback: "The user interface is quite confusing."
Sentiment:
The LLM might correctly output Negative. However, for more ambiguous feedback, it might struggle.
2. Few-Shot Prompt Construction:
Now, let's provide examples (shots) to guide the model:
Classify the sentiment of the customer feedback into Positive, Negative, or Neutral.
Feedback: "I love the new features, they work great!"
Sentiment: Positive
Feedback: "The documentation is okay, but could be clearer."
Sentiment: Neutral
Feedback: "The app crashes every time I try to save."
Sentiment: Negative
Feedback: "The user interface is quite confusing."
Sentiment:
Your Turn:
Run the few-shot prompt above. Does the model output Negative for the confusing interface feedback? Then try more ambiguous feedback such as "Response times are acceptable." or "It works." Does the model classify them as Neutral?
Few-shot prompting significantly improves reliability for specific, customized tasks by providing concrete examples of the desired input-output pattern.
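If you want to build this prompt in code rather than paste it into a playground, a sketch like the following assembles the same few-shot pattern from a list of labeled examples (using the call_llm helper sketched earlier).

# Build a few-shot sentiment prompt from labeled examples.
examples = [
    ("I love the new features, they work great!", "Positive"),
    ("The documentation is okay, but could be clearer.", "Neutral"),
    ("The app crashes every time I try to save.", "Negative"),
]

def build_few_shot_prompt(feedback: str) -> str:
    lines = [
        "Classify the sentiment of the customer feedback into Positive, Negative, or Neutral.",
        "",
    ]
    for text, label in examples:
        lines.append(f'Feedback: "{text}"')
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f'Feedback: "{feedback}"')
    lines.append("Sentiment:")
    return "\n".join(lines)

print(call_llm(build_few_shot_prompt("The user interface is quite confusing.")))
# Expected output: Negative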
Sometimes you need the LLM to adopt a specific persona and format its output in a structured way, like JSON, for easier programmatic use.
Task: Generate a short, factual summary of a historical event, acting as a historian, and output it as a JSON object.
1. Basic Prompt (Potential Issues):
Summarize the moon landing.
This might produce a good summary, but the format is unpredictable (plain text, paragraphs, bullet points), and the tone might vary.
2. Role and Structure Prompt:
Let's instruct the LLM on both who it should be and how it should respond.
Act as a neutral historian. Provide a concise summary of the Apollo 11 moon landing.
Format the output as a JSON object with the following keys: "event_name", "date", "key_figures", "brief_summary".
Example Output Format:
{
"event_name": "...",
"date": "...",
"key_figures": ["...", "..."],
"brief_summary": "..."
}
Provide the summary for the Apollo 11 moon landing:
Your Turn:
Run this prompt. Does the response come back as valid JSON with the requested keys? Try adjusting the persona or the set of keys and observe how the output changes.
This combination is very useful for integrating LLM outputs into applications that expect predictable data structures.
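Because the prompt pins down the output structure, the reply can usually be parsed directly. The sketch below sends the role-and-structure prompt through the call_llm helper from earlier and validates the JSON; the error handling is a simple illustration, since models occasionally wrap JSON in extra prose or code fences.

import json

prompt = (
    "Act as a neutral historian. Provide a concise summary of the Apollo 11 moon landing.\n"
    'Format the output as a JSON object with the following keys: '
    '"event_name", "date", "key_figures", "brief_summary".'
)

reply = call_llm(prompt)

try:
    event = json.loads(reply)
    print(event["event_name"], "-", event["date"])
except (json.JSONDecodeError, KeyError):
    # In a real application you would retry, re-prompt, or clean the reply here.
    print("Could not parse structured output:\n", reply)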
For problems requiring multiple steps of reasoning, simply asking for the answer can lead the LLM to guess or make logical leaps. Chain-of-thought (CoT) prompting encourages the model to show its work, often improving accuracy.
Task: Solve a simple multi-step word problem.
Problem: A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?
1. Direct Prompt:
A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?
For simple problems like this, many LLMs will get it right. However, for more complex logic, they might fail.
2. Chain-of-Thought Prompt:
Let's explicitly ask the LLM to reason step-by-step.
A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?
Let's think step by step:
1. Start with the initial number of apples.
2. Account for the apples sold.
3. Account for the apples received.
4. Calculate the final number.
Alternatively, you can use a few-shot approach demonstrating CoT:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Let's think step by step. Roger started with 5 balls. He bought 2 cans, each with 3 balls, so he bought 2 * 3 = 6 balls. In total, he now has 5 + 6 = 11 balls. The answer is 11.
Q: A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?
A: Let's think step by step.
Your Turn:
Run both CoT prompts. Does the model lay out its reasoning and arrive at 65 apples? Try a more complicated multi-step problem and compare the direct prompt against the step-by-step version.
CoT is particularly effective for arithmetic, commonsense, and symbolic reasoning tasks.
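To try the effect programmatically, you can append the reasoning trigger to the problem and, if you only need the number, pull the final figure out of the reasoning text. The regex below is a naive illustration of answer extraction, not a robust method, and again relies on the call_llm helper sketched earlier.

import re

problem = (
    "A grocery store starts with 50 apples. They sell 15 apples in the morning "
    "and receive a shipment of 30 more apples in the afternoon. "
    "How many apples do they have at the end of the day?"
)

# Zero-shot chain-of-thought: the trailing phrase nudges the model to reason aloud.
cot_reply = call_llm(problem + "\nLet's think step by step.")
print(cot_reply)

# Naive extraction of the last number mentioned, which is usually the final answer.
numbers = re.findall(r"\d+", cot_reply)
if numbers:
    print("Extracted answer:", numbers[-1])  # expected: 65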
These exercises demonstrate the practical application of advanced prompting techniques. Don't hesitate to experiment further with your own tasks and prompt variations.
Mastering these strategies provides you with a powerful toolkit for directing LLM behavior, moving beyond simple Q&A to build more complex and reliable applications. The next chapter delves into the systematic process of designing, testing, and refining prompts.