Let's put what you've learned about interacting with pre-trained Large Language Models into practice. In the previous sections, we discussed finding LLM services, using web interfaces, and making basic API calls. Now, you'll combine these ideas to perform a straightforward text generation task. This exercise is designed to give you hands-on experience sending instructions to an LLM and seeing what comes back.
Our goal is simple: ask an LLM to write a short paragraph describing a sunny day at the beach. This common task allows you to focus on the interaction process itself without needing specialized knowledge.
You can perform this task using either a web interface or a basic API call, based on what you learned earlier in this chapter. In either case, use the following prompt:
Write a short paragraph describing a sunny day at the beach.
Notice how the prompt is a direct instruction. It specifies the desired content ("sunny day at the beach") and hints at the length ("short paragraph"). Clear instructions generally lead to better results.

If you choose the API route, you'll need the API endpoint URL and your API key, as discussed in "Finding and Choosing an LLM Service" and "Introduction to Using LLM APIs."
Prepare Your Request: You'll send an HTTP POST request. A common way to do this from the command line is using `curl`. The structure will look something like this (remember to replace the placeholders):
# Replace YOUR_API_ENDPOINT with the actual URL provided by the service
# Replace YOUR_API_KEY with your unique key
curl YOUR_API_ENDPOINT \
-X POST \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "Write a short paragraph describing a sunny day at the beach.",
"max_tokens": 100
}'
# Look up the documentation of your LLM provider for its specific API specs.
# E.g. https://platform.openai.com/docs/api-reference/introduction
Understand the Components:
- `YOUR_API_ENDPOINT`: The web address where the LLM service listens for requests.
- `-X POST`: Specifies that this is a POST request, used for sending data.
- `-H "Authorization: Bearer YOUR_API_KEY"`: The header used for authentication. Your API key tells the service who is making the request.
- `-H "Content-Type: application/json"`: This header tells the server that the data you're sending (`-d`) is in JSON format.
- `-d '{...}'`: The data payload. It's a JSON object containing:
  - `"prompt"`: Your instruction to the LLM.
  - `"max_tokens"` (example parameter): Many APIs accept parameters that control the output. `max_tokens` often limits the length of the response. We've set it to 100 here as an example; the exact parameter name and function might vary between services.

Execute the Request: Run this command in your terminal (or use the Python sketch below).
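If you prefer Python to the command line, the minimal sketch below makes the same request using the widely used requests library. It assumes the same placeholder endpoint, key, and payload fields ("prompt", "max_tokens") as the curl example above; consult your provider's documentation for the exact request shape it expects.

import requests

# Placeholders: substitute the endpoint URL and key from your chosen provider.
API_ENDPOINT = "YOUR_API_ENDPOINT"
API_KEY = "YOUR_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",  # authenticates the request
    "Content-Type": "application/json",    # the body we send is JSON
}

payload = {
    # Field names mirror the curl example; your provider may expect
    # different ones (for example, a "messages" list instead of "prompt").
    "prompt": "Write a short paragraph describing a sunny day at the beach.",
    "max_tokens": 100,
}

response = requests.post(API_ENDPOINT, headers=headers, json=payload, timeout=30)
response.raise_for_status()  # raise an error if the HTTP call failed
print(response.json())       # the raw JSON body; parsing comes in the next step

Passing json=payload lets requests serialize the dictionary and would set the Content-Type header for you; it is written out explicitly here only to mirror the curl example.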
Interpret the Response: The server will send back a JSON response. As you learned in "Interpreting LLM Responses," you'll need to look inside this JSON structure to find the generated text. It might be nested under a key like `text`, `choices`, or `generations`.
Example JSON Response Structure (Conceptual):
{
"id": "cmpl-xxxxxxxxxxxx",
"object": "text_completion",
"created": 1678886400,
"model": "some-model-name-v1",
"choices": [
{
"text": "\nThe sun beamed down, warming the soft golden sand underfoot. Gentle waves lapped at the shore, creating a soothing rhythm, while children's laughter echoed faintly in the distance. A light, salty breeze rustled through the nearby palm trees, offering a perfect escape.",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
]
}
In this example, the generated paragraph is inside `choices[0].text`.
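Continuing the Python sketch from earlier, here is one way you might pull that field out of the parsed response. The extract_text helper is a hypothetical convenience, and the key names assume the conceptual structure shown above; adjust them to match what your service actually returns.

def extract_text(data: dict) -> str:
    # Assumes the conceptual structure above: choices[0].text holds the text.
    # Other services may nest it under "generations" or another key instead.
    choices = data.get("choices", [])
    if choices:
        return choices[0].get("text", "").strip()
    return ""

# 'response' is the object returned by requests.post in the earlier sketch.
print(extract_text(response.json()))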
Whichever method you used, look at the result: is it roughly a short paragraph, and does it actually describe a sunny day at the beach as requested?

Now, try experimenting: reword the prompt, ask for a different scene, tone, or length, or adjust a parameter like max_tokens, and compare the outputs. This process of writing a prompt, observing the output, and refining the prompt based on the result is fundamental to working effectively with LLMs.
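To make that loop concrete, the illustrative sketch below reuses API_ENDPOINT and headers from the earlier Python example to send a few prompt variations and print the results side by side; the variations themselves are just examples to start from.

# Illustrative prompt variations; tweak wording, length, or parameters
# and compare how the generated text changes.
prompts = [
    "Write a short paragraph describing a sunny day at the beach.",
    "Write a short paragraph describing a sunny day at the beach, from a child's point of view.",
    "Write two sentences describing a stormy evening at the beach.",
]

for prompt in prompts:
    resp = requests.post(
        API_ENDPOINT,
        headers=headers,
        json={"prompt": prompt, "max_tokens": 100},
        timeout=30,
    )
    resp.raise_for_status()
    choices = resp.json().get("choices", [])
    text = choices[0].get("text", "").strip() if choices else ""
    print(f"PROMPT: {prompt}\nOUTPUT: {text}\n")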
You've now successfully used a pre-trained LLM to perform a basic text generation task! This hands-on experience forms a building block for tackling more complex interactions and applications explored in further studies.