This chapter introduces the fundamental principles of model customization. While a pre-trained Large Language Model possesses general capabilities, fine-tuning is the process of adapting it for a specific function or domain. We begin by defining what fine-tuning is and how it differs from the initial pre-training process in its objectives, data requirements, and computational cost.
A central topic here is the practical decision of when to fine-tune. We present an analytical framework to help you determine whether fine-tuning is the correct approach for your problem, comparing it against alternatives such as advanced prompting and retrieval-augmented generation (RAG). Following this, we survey the main customization strategies, from full parameter updates to more computationally efficient methods, and explain how fine-tuning relates to the principles of transfer learning.
To prepare for the practical work ahead, the chapter concludes with a guide to configuring your development environment. We will walk through installing PyTorch, the Transformers library, and other components from the Hugging Face ecosystem, ensuring you have a working environment for the sections that follow.
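As a quick preview of the kind of check covered in Section 1.6, the snippet below is a minimal sketch that verifies the core libraries are importable and reports their versions. It assumes you have already installed the packages, for example with `pip install torch transformers datasets`; the exact package list and versions for this book are given in Section 1.6.

```python
# Minimal environment check: confirm the core libraries import
# and report their versions along with GPU availability.
import torch
import transformers
import datasets

print(f"PyTorch version:      {torch.__version__}")
print(f"Transformers version: {transformers.__version__}")
print(f"Datasets version:     {datasets.__version__}")

# True only if a CUDA-capable GPU and a matching CUDA build
# of PyTorch are present; fine-tuning on CPU alone is slow.
print(f"CUDA available:       {torch.cuda.is_available()}")
```

If any import fails or CUDA is unexpectedly unavailable, resolving that before the later chapters will save considerable debugging time.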
1.1 What is Fine-Tuning?
1.2 Pre-training vs. Fine-Tuning
1.3 When to Fine-Tune: An Analytical Framework
1.4 Overview of Fine-Tuning Strategies
1.5 The Role of Transfer Learning in LLMs
1.6 Setting Up Your Development Environment