Having explored several advanced prompting strategies theoretically, let's put them into practice. This section provides hands-on exercises where you'll apply few-shot prompting, role prompting, structured output requests, and chain-of-thought to guide Large Language Model (LLM) behavior more effectively.

For these exercises, you'll need access to an LLM, either through a web interface (like a playground) or programmatically via an API. The focus here is on crafting the prompt itself; the specific API call details are less important for these exercises, but they were covered in Chapter 1 and will be detailed further in Chapter 4.

### Exercise 1: Improving Classification with Few-Shot Prompting

Zero-shot prompting relies on the LLM's general knowledge. While powerful, it can sometimes be ambiguous for specific classification tasks. Few-shot prompting provides examples within the prompt to clarify your intent.

**Task:** Classify customer feedback into Positive, Negative, or Neutral.

**1. Zero-Shot Attempt:**

Consider this simple prompt:

```
Classify the sentiment of the following customer feedback:

Feedback: "The user interface is quite confusing."
Sentiment:
```

The LLM might correctly output Negative. However, for more ambiguous feedback, it might struggle.

**2. Few-Shot Prompt Construction:**

Now, let's provide examples (shots) to guide the model:

```
Classify the sentiment of the customer feedback into Positive, Negative, or Neutral.

Feedback: "I love the new features, they work great!"
Sentiment: Positive

Feedback: "The documentation is okay, but could be clearer."
Sentiment: Neutral

Feedback: "The app crashes every time I try to save."
Sentiment: Negative

Feedback: "The user interface is quite confusing."
Sentiment:
```

**Your Turn:**

- Send this few-shot prompt to an LLM. Observe the output. Is it consistently Negative?
- Try classifying a new piece of feedback using the same few-shot prompt, for example: "Response times are acceptable."
- Experiment by changing the examples or the number of shots ($k$). How does this affect the classification of ambiguous feedback like "It works."?

Few-shot learning significantly improves reliability for specific, customized tasks by providing concrete examples of the desired input-output pattern.

### Exercise 2: Combining Role Prompting and Structured Output

Sometimes you need the LLM to adopt a specific persona and format its output in a structured way, like JSON, for easier programmatic use.

**Task:** Generate a short, factual summary of a historical event, acting as a historian, and output it as a JSON object.

**1. Basic Prompt (Potential Issues):**

```
Summarize the moon landing.
```

This might produce a good summary, but the format is unpredictable (plain text, paragraphs, bullet points), and the tone might vary.

**2. Role and Structure Prompt:**

Let's instruct the LLM on both who it should be and how it should respond:

```
Act as a neutral historian. Provide a concise summary of the Apollo 11 moon landing.
Format the output as a JSON object with the following keys: "event_name", "date", "key_figures", "brief_summary".

Example Output Format:
{
  "event_name": "...",
  "date": "...",
  "key_figures": ["...", "..."],
  "brief_summary": "..."
}

Provide the summary for the Apollo 11 moon landing:
```

**Your Turn:**

- Send this detailed prompt to an LLM.
- Did the LLM adopt the historian persona (neutral, factual tone)?
- Did it generate valid JSON matching the requested structure?
- Try asking for a summary of a different event using the same prompt structure. How is the formatting?
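If you are working programmatically rather than in a playground, a minimal sketch like the one below shows how this role-plus-structure prompt could be sent and its JSON output validated. It assumes the `openai` Python client purely for illustration; the model name and temperature are placeholders, and any chat-completion API with an equivalent call would work.

```python
import json

from openai import OpenAI  # assumes the `openai` package; any chat-completion API works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """Act as a neutral historian. Provide a concise summary of the Apollo 11 moon landing.
Format the output as a JSON object with the following keys: "event_name", "date", "key_figures", "brief_summary".

Example Output Format:
{
  "event_name": "...",
  "date": "...",
  "key_figures": ["...", "..."],
  "brief_summary": "..."
}

Provide the summary for the Apollo 11 moon landing:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # a low temperature favors consistent formatting
)

raw = response.choices[0].message.content

# Models sometimes wrap JSON in a Markdown code fence; strip backticks and a leading "json" tag if present.
cleaned = raw.strip().strip("`").strip()
if cleaned.lower().startswith("json"):
    cleaned = cleaned[4:].lstrip()

try:
    summary = json.loads(cleaned)
    print(summary["event_name"], "-", summary["date"])
except (json.JSONDecodeError, KeyError, TypeError):
    print("Output was not valid JSON with the expected keys:\n", raw)
```

If parsing fails regularly, an explicit instruction such as "Respond with the JSON object only, with no surrounding text" usually tightens the output.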
This combination is very useful for integrating LLM outputs into applications that expect predictable data structures.

### Exercise 3: Encouraging Reasoning with Chain-of-Thought (CoT)

For problems requiring multiple steps of reasoning, simply asking for the answer can lead the LLM to guess or make logical leaps. CoT prompting encourages the model to show its work, often improving accuracy.

**Task:** Solve a simple multi-step word problem.

**Problem:** A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?

**1. Direct Prompt:**

```
A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?
```

For simple problems like this, many LLMs will get it right. However, for more complex logic, they might fail.

**2. Chain-of-Thought Prompt:**

Let's explicitly ask the LLM to reason step by step:

```
A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?

Let's think step by step:
1. Start with the initial number of apples.
2. Account for the apples sold.
3. Account for the apples received.
4. Calculate the final number.
```

Alternatively, you can use a few-shot approach demonstrating CoT:

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Let's think step by step. Roger started with 5 balls. He bought 2 cans, each with 3 balls, so he bought 2 * 3 = 6 balls. In total, he now has 5 + 6 = 11 balls. The answer is 11.

Q: A grocery store starts with 50 apples. They sell 15 apples in the morning and receive a shipment of 30 more apples in the afternoon. How many apples do they have at the end of the day?
A: Let's think step by step.
```

**Your Turn:**

- Send the direct prompt and the CoT prompt (either version) to your LLM.
- Compare the outputs. Does the CoT prompt produce a clear reasoning path?
- Is the final answer correct in both cases? (For this simple problem, it likely is: 50 - 15 + 30 = 65 apples.)
- Try a slightly more complex problem. Does CoT help maintain accuracy where the direct prompt might fail? Example: "A train travels 120 km in 2 hours. It then travels another 150 km at a speed of 50 km/h. What is the total time taken for the entire trip?"

CoT is particularly effective for arithmetic, commonsense, and symbolic reasoning tasks.

### Experimentation and Next Steps

These exercises demonstrate the practical application of advanced prompting techniques. Don't hesitate to experiment further:

- **Combine techniques:** Use role prompting with few-shot examples and CoT instructions (a sketch of one such combined prompt appears at the end of this section).
- **Refine instructions:** Make your instructions even clearer or more specific.
- **Vary parameters:** Adjust temperature or other generation parameters alongside these prompt structures (as discussed in Chapter 1).

Mastering these strategies gives you a powerful toolkit for directing LLM behavior and prepares you to build more complex and reliable applications. The next chapter examines the systematic process of designing, testing, and refining prompts.
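As the combined sketch referenced above: the snippet below assembles a single prompt that sets a role, includes one worked few-shot CoT example, and ends with a step-by-step cue, then sends it with the same assumed `openai` Python client. The model name is a placeholder and `build_combined_prompt` is a hypothetical helper written for this exercise, not a library function.

```python
from openai import OpenAI  # assumes the `openai` package; any chat-completion API works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_combined_prompt(question: str) -> str:
    """Hypothetical helper: combines role prompting, a few-shot CoT example, and a step-by-step cue."""
    return (
        "Act as a careful math tutor who always shows their reasoning.\n\n"
        "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
        "A: Let's think step by step. Roger started with 5 balls. He bought 2 cans, "
        "each with 3 balls, so he bought 2 * 3 = 6 balls. In total, he now has "
        "5 + 6 = 11 balls. The answer is 11.\n\n"
        f"Q: {question}\n"
        "A: Let's think step by step."
    )


question = (
    "A train travels 120 km in 2 hours. It then travels another 150 km at a speed of "
    "50 km/h. What is the total time taken for the entire trip?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": build_combined_prompt(question)}],
    temperature=0.0,  # near-deterministic output makes reasoning easier to compare across runs
)

print(response.choices[0].message.content)
```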