While the previous chapter focused on managing datasets with DVC, developing machine learning models presents another set of management challenges. Training often involves numerous iterations with adjustments to hyperparameters, code, or feature engineering. Without a structured method, it becomes difficult to recall which settings led to specific results or to reproduce a past experiment accurately.
This chapter introduces MLflow Tracking as a solution for systematically recording your modeling work. The sections below cover how to set up MLflow, log parameters, metrics, and artifacts, organize runs into experiments, and compare results in the MLflow UI.
Adopting these practices provides a clear record of your development process, supporting reproducibility and improving model iteration efficiency.
3.1 The Importance of Experiment Tracking
3.2 Introducing MLflow Tracking
3.3 Setting Up MLflow
3.4 Logging Parameters and Metrics
3.5 Logging Artifacts (Models, Plots, Files)
3.6 Organizing Runs with Experiments
3.7 Using the MLflow UI
3.8 Comparing Experiment Runs
3.9 Practice: Tracking a Training Run
© 2025 ApX Machine Learning