Now that you understand the basic building blocks of prompts and the influence of generation parameters, it's time to put theory into practice. This hands-on section guides you through experimenting with simple prompts and observing how Large Language Models (LLMs) respond to different instructions and settings.
Before starting, ensure you have access to an LLM, either through a playground interface or programmatically via an API. If using an API, you'll need the relevant library installed (e.g., requests or the provider's specific SDK, such as openai). For code examples, we'll assume a helper function call_llm(prompt, temperature=0.7, max_tokens=100) exists, which handles the API call and returns the LLM's text response. You'll need to adapt this to your specific API provider and setup. Remember to handle your API keys securely, perhaps using environment variables.
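If you happen to be using the OpenAI Python SDK, a minimal sketch of such a helper might look like the following. The model name and client setup are assumptions for illustration; substitute whatever your provider requires.
import os
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def call_llm(prompt, temperature=0.7, max_tokens=100):
    # Hypothetical helper: sends a single-turn chat request and returns the text.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use any model you have access to
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content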
Let's start with a straightforward task: generating creative text based on simple instructions.
Task: Ask the LLM to write a short tagline for a new brand of eco-friendly coffee.
Prompt:
Write a short, catchy tagline for a new brand of sustainable, bird-friendly coffee beans called 'Wing & Bean'.
Example Code:
prompt_v1 = """
Write a short, catchy tagline for a new brand of sustainable, bird-friendly coffee beans called 'Wing & Bean'.
"""
response_v1 = call_llm(prompt=prompt_v1, temperature=0.7, max_tokens=50)
print(f"Response 1:\n{response_v1}")
# Now, let's add a constraint: focus on the taste.
prompt_v2 = """
Write a short, catchy tagline for a new brand of sustainable, bird-friendly coffee beans called 'Wing & Bean'. Focus on the rich taste.
"""
response_v2 = call_llm(prompt=prompt_v2, temperature=0.7, max_tokens=50)
print(f"\nResponse 2 (Taste Focused):\n{response_v2}")
Expected Observation:
The first response might focus more broadly on sustainability or the bird-friendly aspect. The second response, guided by the additional instruction "Focus on the rich taste," should yield taglines emphasizing flavor. This demonstrates how adding specific constraints or details to your instruction influences the output.
Try This: Modify the prompt further. Ask for taglines that rhyme, or taglines under five words. Observe how the LLM adapts to these new instructions.
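For example, you could compare a rhyming version and a length-constrained version side by side, reusing the same hypothetical call_llm helper:
# Variation A: ask for a rhyming tagline
prompt_rhyme = """
Write a short, catchy tagline that rhymes for a new brand of sustainable,
bird-friendly coffee beans called 'Wing & Bean'.
"""
print(f"Rhyming:\n{call_llm(prompt=prompt_rhyme, temperature=0.7, max_tokens=50)}")

# Variation B: constrain the length to fewer than five words
prompt_short = """
Write a catchy tagline of fewer than five words for a new brand of sustainable,
bird-friendly coffee beans called 'Wing & Bean'.
"""
print(f"\nUnder five words:\n{call_llm(prompt=prompt_short, temperature=0.7, max_tokens=50)}")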
LLMs are often used for summarizing text. Let's test this capability.
Task: Summarize a paragraph about the benefits of remote work.
Input Text:
Remote work offers significant advantages for both employees and employers. Employees gain flexibility in their schedules, eliminate commuting time and costs, and often report a better work-life balance. Employers can access a wider talent pool, potentially reduce office overhead expenses, and may see increased productivity from focused employees. However, challenges like maintaining company culture and ensuring effective communication need careful management.
Prompt:
Summarize the following text about remote work in a single sentence:
Remote work offers significant advantages for both employees and employers. Employees gain flexibility in their schedules, eliminate commuting time and costs, and often report a better work-life balance. Employers can access a wider talent pool, potentially reduce office overhead expenses, and may see increased productivity from focused employees. However, challenges like maintaining company culture and ensuring effective communication need careful management.
Example Code:
input_text = """
Remote work offers significant advantages for both employees and employers. Employees gain flexibility in their schedules, eliminate commuting time and costs, and often report a better work-life balance. Employers can access a wider talent pool, potentially reduce office overhead expenses, and may see increased productivity from focused employees. However, challenges like maintaining company culture and ensuring effective communication need careful management.
"""
prompt = f"Summarize the following text about remote work in a single sentence:\n\n{input_text}"
response = call_llm(prompt=prompt, temperature=0.5, max_tokens=60)
print(f"Summary:\n{response}")
Expected Observation:
The LLM should provide a concise summary capturing the main points of the original text, ideally in one sentence as requested.
Try This: Change the prompt to ask for a three-sentence summary. Ask it to summarize specifically for an audience of CEOs. Notice how the length and potentially the focus of the summary change based on the instructions.
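As a sketch, the same input_text can be reused with different instructions; only the prompt changes:
# Three-sentence summary
prompt_three = f"Summarize the following text about remote work in exactly three sentences:\n\n{input_text}"
print(f"Three sentences:\n{call_llm(prompt=prompt_three, temperature=0.5, max_tokens=120)}\n")

# Summary tailored to an executive audience
prompt_ceo = f"Summarize the following text about remote work in one sentence, written for an audience of CEOs:\n\n{input_text}"
print(f"For CEOs:\n{call_llm(prompt=prompt_ceo, temperature=0.5, max_tokens=60)}")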
As discussed earlier, the temperature parameter controls the randomness or creativity of the output. A lower temperature makes the output more focused and deterministic, while a higher temperature leads to more diverse and sometimes unexpected results.
Task: Generate ideas for a fantasy story plot.
Prompt:
Brainstorm three plot ideas for a fantasy story involving a lost map and a hidden city.
Example Code (Comparing Temperatures):
prompt = "Brainstorm three plot ideas for a fantasy story involving a lost map and a hidden city."
# Low temperature: More predictable, focused output
response_low_temp = call_llm(prompt=prompt, temperature=0.2, max_tokens=150)
print(f"Response (Temp=0.2):\n{response_low_temp}\n")
# High temperature: More creative, diverse output
response_high_temp = call_llm(prompt=prompt, temperature=0.9, max_tokens=150)
print(f"Response (Temp=0.9):\n{response_high_temp}")
# Note: Running the high-temperature prompt multiple times will likely yield
# significantly different results each time, while the low-temperature
# prompt will produce more similar outputs.
Expected Observation:
The low-temperature (0.2) response will likely provide standard fantasy tropes related to maps and hidden cities, perhaps quite similar each time you run it. The high-temperature (0.9) response should offer more varied and potentially unusual plot twists or character ideas. It might connect concepts in less obvious ways. Running the high-temperature prompt multiple times will likely produce noticeably different sets of ideas, showcasing the increased randomness.
Lower temperature sharpens the probability distribution, making the most likely next word much more probable. Higher temperature flattens the distribution, increasing the chance of selecting less common words.
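To make that concrete, here is a small, self-contained sketch (pure NumPy, no API call) of how dividing logits by a temperature before the softmax reshapes the probabilities over candidate next tokens. The logit values are made up for illustration.
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then normalize into probabilities.
    scaled = np.array(logits, dtype=float) / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exps / exps.sum()

logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for four candidate tokens
print("T=0.2:", softmax_with_temperature(logits, 0.2).round(3))  # nearly all mass on one token
print("T=1.0:", softmax_with_temperature(logits, 1.0).round(3))  # moderate spread
print("T=2.0:", softmax_with_temperature(logits, 2.0).round(3))  # much flatter distribution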
Try This: Use a prompt asking the LLM to complete a sentence like "The spaceship landed on a planet made of...". Run it multiple times with temperature=0.1 and temperature=1.0. Compare the consistency and creativity of the completions, as in the sketch below.
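A quick sketch of that comparison, again assuming the call_llm helper from above:
prompt = "Complete this sentence: The spaceship landed on a planet made of"

for temp in (0.1, 1.0):
    print(f"--- temperature={temp} ---")
    for i in range(3):
        # Three runs per setting to compare consistency across samples
        print(f"Run {i + 1}: {call_llm(prompt=prompt, temperature=temp, max_tokens=30)}")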
The max_tokens parameter sets a limit on the length of the generated response (in some models this count includes the prompt, but it usually refers to the generated part). It's important for controlling costs, latency, and ensuring the output fits requirements.
Task: Ask the LLM to explain photosynthesis.
Prompt:
Explain the process of photosynthesis in simple terms.
Example Code (Comparing max_tokens):
prompt = "Explain the process of photosynthesis in simple terms."
# Low max_tokens: Truncated output
response_short = call_llm(prompt=prompt, temperature=0.5, max_tokens=25)
print(f"Response (max_tokens=25):\n{response_short}\n")
# High max_tokens: More complete output
response_long = call_llm(prompt=prompt, temperature=0.5, max_tokens=150)
print(f"Response (max_tokens=150):\n{response_long}")
Expected Observation:
The response with max_tokens=25
will likely be cut off mid-explanation, possibly even mid-sentence. It provides only the beginning of the answer. The response with max_tokens=150
should provide a much more complete, though still simple, explanation of photosynthesis. This illustrates how max_tokens
acts as a hard limit on the generation length.
Try This: Experiment with a task like writing a short story. Set max_tokens to a very small value (e.g., 10) and gradually increase it. Observe how the story develops until it feels complete or hits the token limit. Consider the trade-off: a larger max_tokens allows for more complete answers but increases API costs and response time.
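One way to run that experiment is a simple loop that raises the limit step by step (a sketch, using the same assumed helper):
story_prompt = "Write a very short story about a lighthouse keeper who finds a message in a bottle."

for limit in (10, 30, 60, 120):
    # Low limits will likely truncate the story mid-sentence; larger ones let it finish.
    story = call_llm(prompt=story_prompt, temperature=0.8, max_tokens=limit)
    print(f"--- max_tokens={limit} ---\n{story}\n")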
These simple experiments demonstrate fundamental interactions with LLMs: the wording and constraints of your prompt directly shape what the model produces, temperature significantly impacts the style (creative vs. predictable) of the output, and max_tokens controls the length. These foundational skills are the starting point for more sophisticated prompt engineering. As you move through the course, you'll learn techniques to handle more complex tasks and improve the reliability of LLM responses in your applications. Keep experimenting! Try different prompts, tasks, and parameter combinations to build your intuition for how LLMs behave.