In the previous chapter, you learned to make a basic generation call. The quality of an LLM's response, however, is directly dependent on the quality of the input you provide. This chapter focuses on the techniques for constructing effective inputs, a practice known as prompt engineering.
We will begin by using Kerb's template engine to build prompts that can be dynamically populated with variables. You will then see how to organize and version these prompts for maintainable applications. To improve model performance on specific tasks, we will implement few-shot prompting, which involves providing examples within the prompt itself.
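To make these ideas concrete before the chapter walks through them in detail, the following is a minimal sketch of dynamic templating and few-shot prompting. It uses plain Python string formatting as a stand-in; the template text, variable names, and helper function are illustrative assumptions, not Kerb's actual template engine API.

```python
# A minimal sketch of dynamic prompt templating and few-shot prompting,
# using plain Python string formatting as a stand-in for Kerb's template engine.

# A prompt template with named variables filled in at call time.
TEMPLATE = (
    "You are a helpful assistant.\n"
    "Classify the sentiment of the following review as positive or negative.\n\n"
    "{examples}\n"
    "Review: {review}\n"
    "Sentiment:"
)

# Few-shot prompting: worked examples placed directly inside the prompt.
FEW_SHOT_EXAMPLES = (
    "Review: The battery lasts all day and the screen is gorgeous.\n"
    "Sentiment: positive\n\n"
    "Review: It stopped working after a week and support never replied.\n"
    "Sentiment: negative\n"
)

def build_prompt(review: str) -> str:
    """Populate the template with the examples and the new input."""
    return TEMPLATE.format(examples=FEW_SHOT_EXAMPLES, review=review)

if __name__ == "__main__":
    print(build_prompt("Setup was quick, but the fan is loud."))
```

The same pattern scales to any task: keep the template and the example set as separate, versioned pieces of text, and combine them with the user's input at request time.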
A common requirement is to get structured data, not just free-form text, from a model. We will cover methods to guide the LLM into producing formatted output and then use the parsing module to reliably extract structured information, such as JSON objects and code blocks, from the model's response. These techniques give you more precise control over the model's behavior and the format of its output.
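As a preview of the extraction step, here is a generic sketch of pulling a JSON object out of a model response that may wrap it in prose or a code fence. It relies only on Python's standard json and re modules; the function name and regular expression are assumptions for illustration, not Kerb's parsing module.

```python
# A generic sketch (not Kerb's parsing module) of extracting a JSON object
# from a model response that may include surrounding prose or a code fence.
import json
import re

def extract_json(response: str) -> dict:
    """Return the first JSON object found in the response text."""
    # Prefer content inside a ```json ... ``` fence if one is present.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", response, re.DOTALL)
    candidate = fenced.group(1) if fenced else response
    if not fenced:
        # Fall back to the outermost braces in the raw text.
        start, end = candidate.find("{"), candidate.rfind("}")
        if start == -1 or end == -1:
            raise ValueError("No JSON object found in response")
        candidate = candidate[start : end + 1]
    return json.loads(candidate)

if __name__ == "__main__":
    reply = (
        "Sure! Here is the result:\n"
        '```json\n{"sentiment": "positive", "score": 0.92}\n```'
    )
    print(extract_json(reply))
```

Asking the model for a specific format and then parsing defensively, as above, is the combination this chapter develops in sections 2.5 and 2.6.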
2.1 Introduction to Prompt Engineering
2.2 Creating Dynamic Prompts with the Template Engine
2.3 Managing and Versioning Prompts
2.4 Implementing Few-Shot Prompting
2.5 Extracting Structured Data from LLM Outputs
2.6 Parsing JSON and Code Snippets