Theory is valuable, but practice is where understanding solidifies. Now that you've learned the fundamental concepts of prompts, clear instructions, and providing examples, it's time to interact directly with a Large Language Model. This section provides hands-on exercises to help you craft your first prompts and observe the results.
For these exercises, you'll need access to an LLM. You can use one of the web interfaces or basic API methods that will be discussed in more detail in Chapter 5 ("Using Pre-trained LLMs"). Many free and paid services offer simple chat-like interfaces perfect for getting started. Don't worry about finding the "perfect" LLM right now; the goal is to practice the process of prompting.
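If you would rather experiment programmatically than through a chat interface, most chat-style LLM APIs accept a request shaped roughly like the one below. This is a minimal sketch assuming a generic OpenAI-style chat endpoint; the URL, model name, and response format are placeholders to adapt to whichever service you actually use.

```python
# Minimal sketch of sending a prompt to a chat-style LLM API.
# The endpoint URL, model name, and payload shape are assumptions;
# adapt them to the service you are using.
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Assemble a chat-completion payload from a single user prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send_prompt(prompt: str, api_url: str, api_key: str) -> str:
    """POST the prompt to a (hypothetical) endpoint and return the raw response body."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        api_url,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# You can inspect the payload without making any network call:
print(build_chat_request("What is the main function of a CPU in a computer?"))
```

The separation between building the payload and sending it lets you check exactly what text the model will receive, which is useful when debugging prompts.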
Remember, LLMs can sometimes produce unexpected or slightly varied outputs even with the same prompt. Focus on the general structure and intent of the responses rather than precise word-for-word replication.
Let's start with the most basic type of interaction: asking a direct question or giving a simple command.
Exercise 1: Simple Question
What is the main function of a CPU in a computer?
Expected Outcome: The LLM should provide a concise explanation of a CPU's role, likely mentioning executing instructions or performing calculations. Notice how a direct question often yields a direct answer.
Exercise 2: Simple Command
List three primary colors.
Expected Outcome: The model should output a list containing red, yellow, and blue. This demonstrates following a straightforward instruction. Did it format it as a numbered list, bullet points, or just comma-separated text? The format might vary unless specified.
As discussed earlier, clarity is important. Let's try refining an instruction.
Exercise 3: Specifying Format
List the three primary colors as a numbered list.
Expected Outcome: This time, the LLM is more likely to present the colors using numbered points (e.g., 1. Red, 2. Yellow, 3. Blue). This shows how adding specific constraints influences the structure of the response.
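The effect of a format constraint is easier to compare if you parameterize it. The helper below is a small illustrative sketch (the function name and phrasing are hypothetical, not a library API); any equally clear wording works.

```python
# Sketch: optionally appending a format constraint to a base instruction.
def with_format(instruction: str, format_hint: str = "") -> str:
    """Append a format constraint to an instruction, if one is given."""
    if not format_hint:
        return instruction
    return f"{instruction} Present the answer as {format_hint}."

base = "List the three primary colors."
print(with_format(base))                          # unconstrained
print(with_format(base, "a numbered list"))       # numbered list
print(with_format(base, "a comma-separated line"))
```

Sending each variant to the same model and comparing the outputs makes the influence of the constraint easy to see.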
LLMs excel at generating text. Let's try a simple creative task and attempt to control the output length.
Exercise 4: Short Sentence Generation
Write one sentence describing a rainy day.
Expected Outcome: The model should generate a single sentence related to rain, perhaps mentioning the sound, the look, or the feeling.
Exercise 5: Expanding the Generation
Write three sentences describing a rainy day.
Expected Outcome: The model should generate a short paragraph, approximately three sentences long, about a rainy day. While LLMs don't always adhere strictly to exact sentence counts, explicitly requesting a number often guides the output length effectively.
Providing examples can significantly guide the model, especially for specific formats or tasks it might not immediately grasp.
Exercise 6: Simple Analogy (Few-Shot)
Complete the analogy:
Dog is to bark as cat is to meow.
Tree is to leaf as flower is to petal.
Sun is to day as moon is to
Expected Outcome: By seeing the pattern (Object : Related Part/Concept), the LLM is guided to complete the final analogy correctly, likely outputting "night". This demonstrates how a simple example (or two) sets the context for the desired task.
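This few-shot pattern generalizes: you can assemble such prompts programmatically from a list of completed examples plus one unfinished item for the model to complete. A minimal sketch (the helper name is hypothetical):

```python
# Sketch: assembling a few-shot prompt from completed examples
# plus one incomplete line for the model to finish.
def few_shot_prompt(examples: list[str], query: str) -> str:
    """Join an instruction, completed examples, and an unfinished line."""
    lines = ["Complete the analogy:"]
    lines.extend(examples)
    lines.append(query)
    return "\n".join(lines)

prompt = few_shot_prompt(
    [
        "Dog is to bark as cat is to meow.",
        "Tree is to leaf as flower is to petal.",
    ],
    "Sun is to day as moon is to",
)
print(prompt)
```

Keeping the examples in a list makes it easy to test how adding or removing an example changes the model's completion.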
Go through these exercises, but don't stop there. Try modifying the prompts slightly: change the requested output format, ask for a different number of items or sentences, rephrase a question in your own words, or add and remove examples in the analogy prompt.
Observe how these changes affect the LLM's responses. Note when the model follows your instructions well and when it seems to misunderstand or ignore parts of the prompt. This experimentation is fundamental to developing effective prompting skills. You are learning how to communicate your intent to the model through the text you provide. Keep practicing!
© 2025 ApX Machine Learning