The core of any LangChain application is the interaction between your code and a Large Language Model (LLM). This chapter covers the three primary components that manage this interaction: Models, Prompts, and Parsers.
You will start by learning how to interface with different types of models, specifically LLMs and Chat Models, and understand when to use each. Following that, we will address how to construct precise and dynamic instructions for these models using PromptTemplates. You will also see how to improve model responses by providing examples within the prompt, a technique known as few-shot prompting. Finally, we will cover Output Parsers, which are necessary for converting a model's free-form text response into a structured and usable format, such as JSON.
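To make the few-shot idea concrete, here is a minimal sketch of a prompt template that interpolates examples before the actual input. It uses only plain Python string formatting; LangChain's PromptTemplate and FewShotPromptTemplate classes provide the same idea with validation and composition built in, and the task, template text, and helper name here are illustrative choices, not part of any library.

```python
# Illustrative sketch of few-shot prompting with a template (plain Python,
# no LangChain dependency). The template has two placeholders: one for the
# worked examples, one for the new input.

TEMPLATE = (
    "Classify the sentiment of the review as positive or negative.\n"
    "{examples}\n"
    "Review: {review}\n"
    "Sentiment:"
)

def format_prompt(examples, review):
    """Fill the template's placeholders, few-shot examples first."""
    example_block = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return TEMPLATE.format(examples=example_block, review=review)

prompt = format_prompt(
    [("Great battery life!", "positive"), ("Broke after one day.", "negative")],
    "Works exactly as described.",
)
print(prompt)
```

Because the examples demonstrate the desired input/output pattern in-context, the model is more likely to complete the final "Sentiment:" line in the same format, which in turn makes the response easier to parse.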
These components form a standard invocation sequence, often represented as Prompt → Model → Parser. The chapter culminates in a practical exercise where you will combine these elements to build an application that extracts structured data from a block of text.
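The invocation sequence above can be sketched end to end with a stand-in for the model, so the flow runs without an API key. This is a conceptual illustration of the Prompt → Model → Parser sequence, not LangChain's actual API: the function names and the canned model response are assumptions made for the example. In LangChain, the same flow is expressed by composing a PromptTemplate, a chat model, and an output parser.

```python
import json

# Conceptual sketch of the Prompt -> Model -> Parser sequence.

def build_prompt(text):
    # Prompt step: instruct the model to answer with JSON only.
    return (
        "Extract the person's name and age from the text below. "
        'Respond with JSON like {"name": ..., "age": ...} and nothing else.\n'
        f"Text: {text}"
    )

def fake_model(prompt):
    # Model step: a stub standing in for a real LLM call, returning a
    # canned response so the sketch runs offline.
    return '{"name": "Ada Lovelace", "age": 36}'

def parse_output(raw):
    # Parser step: convert the model's free-form text into a Python dict.
    return json.loads(raw)

result = parse_output(fake_model(build_prompt("Ada Lovelace was 36.")))
print(result)
```

The payoff of the parser step is that downstream code works with `result["name"]` and `result["age"]` as typed values rather than scraping substrings out of raw model text.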
2.1 Interfacing with LLMs and Chat Models
2.2 Managing Prompts with PromptTemplates
2.3 Implementing Few-Shot Prompting
2.4 Structuring Output with Parsers
2.5 Hands-on Practical: Building a Structured Data Extractor
© 2026 ApX Machine Learning