Fine-tuning large language models (LLMs) is a standard technique for adapting them to specific tasks. However, updating every parameter of a model with billions of them, organized layer by layer into large weight matrices W, incurs substantial computational and memory costs. This chapter examines the challenges inherent in full fine-tuning.
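To make the scale of the problem concrete, here is a back-of-the-envelope sketch (an illustration added for this page, not code from the course). It assumes full fine-tuning in fp32 with the Adam optimizer, which keeps roughly 16 bytes per trainable parameter, and ignores activation memory; the function name and model sizes are hypothetical.

```python
# Rough memory estimate for full fine-tuning with Adam in fp32:
# per trainable parameter, 4 bytes of weights + 4 bytes of gradients
# + 8 bytes of optimizer state (first and second moments) = 16 bytes.
# Activation memory, which depends on batch size and sequence length, is ignored.

def full_finetune_memory_gib(num_params: float, bytes_per_param: int = 16) -> float:
    """Estimate weight + gradient + Adam-state memory in GiB."""
    return num_params * bytes_per_param / 1024**3

# Illustrative model sizes, not specific to any particular model.
for billions in (1, 7, 70):
    n = billions * 1e9
    print(f"{billions:>3}B parameters -> ~{full_finetune_memory_gib(n):,.0f} GiB")
```

Under these assumptions, even a 7B-parameter model needs on the order of 100 GiB just for weights, gradients, and optimizer state, before accounting for activations, which is why updating all parameters quickly exceeds the memory of a single accelerator.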
You will learn about:

1.1 Computational Costs of Full Fine-Tuning
1.2 The Parameter Efficiency Imperative
1.3 Mathematical Preliminaries: Singular Value Decomposition
1.4 Taxonomy of Parameter-Efficient Fine-Tuning Methods

This foundational knowledge will illustrate why PEFT methods have become essential and introduce the categories of techniques explored throughout this course.