Having established the foundations of ensemble methods and additive modeling, this chapter focuses on the mechanics of the standard Gradient Boosting Machine (GBM) algorithm. We move from the general concept to a specific, step-by-step understanding of how these models are constructed.
You will learn to view the boosting process through the lens of optimization, specifically as gradient descent performed in function space. We will formally derive the generic GBM algorithm, clarifying how residuals or pseudo-residuals guide the training of each sequential base learner (typically a decision tree).
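To make that idea concrete ahead of the formal derivation, here is a minimal sketch of the process for squared-error loss, where the negative gradient (the pseudo-residual) is simply the ordinary residual. The helper names `gbm_fit` and `gbm_predict` and the hyperparameter values are illustrative assumptions, not part of any library; the general algorithm is developed properly in Section 2.2.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbm_fit(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Boosting with squared-error loss: each tree is fit to the
    pseudo-residuals, i.e. the negative gradient of the loss w.r.t.
    the current prediction, which here equals y - F(x)."""
    f0 = np.mean(y)                               # initial constant model
    prediction = np.full(len(y), f0, dtype=float)
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction                # negative gradient of 0.5*(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)                    # base learner approximates the gradient
        prediction += learning_rate * tree.predict(X)  # small step in function space
        trees.append(tree)
    return f0, trees

def gbm_predict(X, f0, trees, learning_rate=0.1):
    """Sum the initial constant and the shrunken contribution of every tree."""
    pred = np.full(X.shape[0], f0, dtype=float)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```

Each iteration takes one gradient-descent step in function space: the tree approximates the negative gradient, and the learning rate controls the step size.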
Key aspects covered include:
- Viewing boosting as gradient descent performed in function space
- Deriving the generic GBM algorithm step by step
- Common loss functions for regression and classification tasks
- The roles of shrinkage (the learning rate) and subsampling (stochastic gradient boosting)
- Implementing GBM with Scikit-learn's GradientBoostingRegressor and GradientBoostingClassifier

By the end of this chapter, you will have a detailed operational understanding of the classic GBM algorithm, preparing you for the more advanced implementations discussed later. A hands-on section will guide you through building your first GBM model using Python.
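As a preview of that hands-on section, the snippet below shows one plausible way to fit a regression GBM with Scikit-learn's GradientBoostingRegressor. The dataset is synthetic and the hyperparameter values are placeholders for illustration, not recommendations; the `loss="squared_error"` spelling assumes a recent Scikit-learn release.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data purely for demonstration
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

gbm = GradientBoostingRegressor(
    n_estimators=200,       # number of sequential trees
    learning_rate=0.1,      # shrinkage applied to each tree's contribution
    max_depth=3,            # depth of each base learner
    subsample=0.8,          # fraction of rows per tree (stochastic gradient boosting)
    loss="squared_error",   # regression loss
    random_state=42,
)
gbm.fit(X_train, y_train)
print("Test MSE:", mean_squared_error(y_test, gbm.predict(X_test)))
```

Note how the constructor arguments map onto the topics below: `learning_rate` is the shrinkage discussed in Section 2.5, and setting `subsample` below 1.0 enables the stochastic variant covered in Section 2.6.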
2.1 Functional Gradient Descent
2.2 Deriving the Generic GBM Algorithm
2.3 Common Loss Functions for Regression
2.4 Common Loss Functions for Classification
2.5 The Role of Shrinkage (Learning Rate)
2.6 Subsampling Techniques (Stochastic Gradient Boosting)
2.7 Implementing GBM with Scikit-learn
2.8 Practice: Building a Basic GBM Model