You've set up your environment and successfully run a local Large Language Model. The next step is to communicate with it effectively. This chapter focuses on the core technique for interacting with LLMs: prompting.
You will learn how to structure your input, known as a prompt, to get the responses you want from the model. We will cover asking simple questions, giving instructions for tasks like summarization, and understanding the context window, which limits how much of the conversation the model can remember. We will also briefly look at settings like temperature that affect the randomness of the generated text. By the end of this chapter, you'll be able to perform basic interactions with your local LLM.
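To make these ideas concrete, here is a minimal sketch of the kind of prompt construction this chapter covers. The helper `build_summary_prompt` is a hypothetical function, not part of any library, and it approximates tokens by counting words; real models count tokens with their own tokenizers, so treat the budget here as a rough illustration of the context-window idea.

```python
# Sketch: wrap text in a summarization instruction and trim it so the
# whole prompt fits an approximate context budget.
# NOTE: token counts are approximated by word counts for illustration;
# a real model's tokenizer will count differently.

def build_summary_prompt(text, max_context_tokens=2048, reserved_for_reply=256):
    """Return a summarization prompt that (approximately) fits the context window."""
    instruction = "Summarize the following text in two sentences:\n\n"
    # Leave room for the instruction itself and for the model's reply.
    budget = max_context_tokens - reserved_for_reply - len(instruction.split())
    words = text.split()
    if len(words) > budget:
        words = words[:budget]  # anything past the budget is dropped; the model never sees it
    return instruction + " ".join(words)

prompt = build_summary_prompt(
    "Large Language Models generate text one token at a time. " * 50
)
print(prompt.splitlines()[0])  # → Summarize the following text in two sentences:
```

The key point is that the instruction, the input text, and the model's reply all share one fixed context window, so long inputs must be trimmed (or summarized in chunks) before they reach the model.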
5.1 What is a Prompt?
5.2 Your First Prompt: Simple Questions
5.3 Giving Instructions
5.4 Understanding the Context Window
5.5 Basic Prompt Formatting Tips
5.6 Temperature and Creativity
5.7 Common Interaction Patterns
5.8 Practice: Simple Prompting Techniques
© 2025 ApX Machine Learning