After instrumenting your training scripts to log experiments with the MLflow Tracking API, the next step is to visualize and analyze this information. While the API allows programmatic access, the MLflow User Interface (UI) provides a powerful web-based dashboard for interactive exploration, comparison, and management of your experiment runs.
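As a reminder of what such instrumentation can look like, here is a minimal sketch of a tracked training script; the run name, parameter values, and metric values are illustrative placeholders, not part of a specific example project.

```python
import mlflow

with mlflow.start_run(run_name="baseline"):
    # Hyperparameters chosen for illustration only
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)

    # ... train and evaluate the model here ...

    # Final evaluation metrics (placeholder values)
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_metric("val_loss", 0.21)
```

Every run logged this way becomes a row you can inspect in the UI described below.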
To start the UI, navigate to your project's root directory (or any directory containing the mlruns folder, if you used the default local file store) in your terminal and run the command:

mlflow ui

By default, this command launches a local web server, typically accessible at http://127.0.0.1:5000. MLflow will print the exact URL in your terminal.
If you configured MLflow to use a different backend store (such as a database) when logging runs, you need to point the UI to the same location using the --backend-store-uri flag:
# Example for a PostgreSQL backend
mlflow ui --backend-store-uri postgresql://user:password@host:port/database
# For runs logged to a remote tracking server, the UI is served by that server itself;
# open its address directly in your browser (e.g., http://your-mlflow-server.com:5000)
Open the provided URL in your web browser to access the interface.
The MLflow UI presents a clean and organized view of your experiments and runs.
The default view usually lists all your experiments in the left-hand navigation panel. An experiment groups related runs, such as different attempts to train a model for a specific task. Clicking on an experiment name filters the main view to show only the runs associated with that experiment. The "Default" experiment is created automatically if you don't specify one when logging.
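If you want runs grouped under a named experiment instead of "Default", select or create one before logging. A minimal sketch is shown below; the experiment name is a hypothetical placeholder.

```python
import mlflow

# Creates the experiment if it does not exist, then makes it the active one
mlflow.set_experiment("churn-model-tuning")

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", 0.91)
```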
When an experiment is selected, the main area displays a table listing the individual runs. Each row represents a single execution of your tracked script (mlflow.start_run()). Key information is presented in columns:

- Parameters: the hyperparameters logged for the run (e.g., learning_rate, n_estimators).
- Metrics: the performance measures logged for the run (e.g., accuracy, rmse, val_loss). You typically see the final logged value here.

You can customize the displayed columns, sort by any column (e.g., sort by accuracy descending to find the best run), and filter runs based on parameter values or metric scores using the search box.
Clicking on the start time or name of a specific run takes you to its detail page, where you examine the specifics of an individual experiment trial. This page provides a comprehensive view of everything logged for that single run: its parameters, metrics, tags, and the artifacts stored with it, such as serialized model files (.pkl, .h5, or a saved_model directory).
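Artifacts appear on this page only if they were logged. The sketch below shows one way to attach a model and a plain file to a run; the scikit-learn model, the artifact path, and the file name are assumptions for illustration.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    mlflow.log_metric("accuracy", model.score(X, y))

    # Stored under the run's artifact directory and browsable on the detail page
    mlflow.sklearn.log_model(model, artifact_path="model")

    # Any local file can also be attached as an artifact
    with open("feature_names.txt", "w") as f:
        f.write("\n".join(f"feature_{i}" for i in range(X.shape[1])))
    mlflow.log_artifact("feature_names.txt")
```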
One of the most valuable features of the MLflow UI is its ability to compare multiple runs side-by-side. Select the runs of interest with the checkboxes in the runs table and click the Compare button to open a dedicated comparison view.
This view lines up the parameters and metrics of the selected runs and lets you plot the relationship between a parameter and a metric (e.g., learning_rate vs. accuracy) or between two metrics.

Figure: Scatter plot comparing Validation Accuracy against Learning Rate for five different experiment runs.
These comparison tools significantly accelerate the process of hyperparameter tuning and model selection by visually organizing the results of multiple trials.
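The same data behind these comparison views is available programmatically, so you can also build such plots yourself. The sketch below uses mlflow.search_runs and matplotlib, assuming your runs logged a learning_rate parameter and an accuracy metric as in the earlier examples; the experiment name is a placeholder.

```python
import matplotlib.pyplot as plt
import mlflow

# Fetch all runs of an experiment as a pandas DataFrame
runs = mlflow.search_runs(experiment_names=["Default"])

# Parameters are returned as strings, so cast before plotting
lr = runs["params.learning_rate"].astype(float)
acc = runs["metrics.accuracy"]

plt.scatter(lr, acc)
plt.xlabel("learning_rate")
plt.ylabel("accuracy")
plt.title("Validation accuracy vs. learning rate")
plt.show()
```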
For experiments with many runs, the search functionality is indispensable. You can construct queries based on:
- Metrics, e.g., metrics.accuracy > 0.9
- Parameters, e.g., params.learning_rate = '0.01' or params.optimizer = 'Adam'
- Tags, e.g., tags.data_version = 'v2'
- Run attributes, e.g., attributes.status = 'FINISHED'

Combine these clauses using AND to create specific filters, allowing you to quickly find runs matching complex criteria.
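The same filter syntax works outside the UI through the tracking API. Here is a small sketch using mlflow.search_runs; the experiment name and the accuracy threshold are illustrative assumptions.

```python
import mlflow

# Find finished runs above an accuracy threshold, best first
best_runs = mlflow.search_runs(
    experiment_names=["Default"],
    filter_string="metrics.accuracy > 0.9 and attributes.status = 'FINISHED'",
    order_by=["metrics.accuracy DESC"],
)
print(best_runs[["run_id", "params.learning_rate", "metrics.accuracy"]].head())
```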
The MLflow UI transforms experiment tracking from a passive logging activity into an active analysis and decision-making tool. By providing intuitive ways to view, sort, filter, compare, and deep-dive into individual runs, it helps you understand your model's behavior, identify optimal configurations, and ultimately build better machine learning models more efficiently.