By Wei Ming T. on Dec 5, 2024
Machine learning (ML) model deployment can be complex, especially when transitioning from development to production. Docker, a containerization platform, simplifies this process by enabling you to package your application and its dependencies into a portable and scalable unit. This guide walks through deploying ML models with Docker, covering CI/CD pipelines, multi-stage builds, and security best practices.
Docker has emerged as the go-to tool for ML deployment for several reasons:

- Reproducibility: the model, code, and dependencies ship together, so the container behaves the same in development and production
- Portability: the same image runs on a laptop, an on-premises server, or any cloud provider
- Isolation: each model's dependencies live inside its container, avoiding version conflicts between projects
- Scalability: containers are easy to replicate behind a load balancer or an orchestrator such as Kubernetes
Before starting, ensure you have:

- Docker installed and running (`docker --version` should succeed)
- Python 3.10 or later
- A trained model serialized to `model.pkl`
- A Docker Hub account, if you plan to push the image to a registry
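The steps below assume a model already serialized to `model.pkl`. If you don't have one yet, here is a minimal sketch that produces a compatible file; the iris dataset and `LogisticRegression` are illustrative choices, not requirements:

```python
# train.py -- produce the model.pkl that the Flask app expects
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on the iris dataset (4 features per sample,
# matching the example input used to test the API later)
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Serialize the fitted model for the API to load at startup
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```

Any estimator with a `predict` method will work the same way, as long as the serving code and the training code agree on the input shape.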
First, write a script to serve predictions using Flask:
```python
# app.py
import pickle

from flask import Flask, request, jsonify

# Load the pre-trained model
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# Initialize Flask app
app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    try:
        data = request.get_json()
        predictions = model.predict([data["input"]])
        return jsonify({"predictions": predictions.tolist()})
    except Exception as e:
        return jsonify({"error": str(e)}), 400

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```
Create a Dockerfile for building your Docker image:
```dockerfile
# Use the official lightweight Python image
FROM python:3.10-slim

# Set the working directory
WORKDIR /app

# Copy the dependencies file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Expose Flask's default port
EXPOSE 5000

# Run the application
CMD ["python", "app.py"]
```
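Flask's built-in server is fine for local testing but is not intended for production traffic. A common swap, sketched here as an assumption beyond the original setup (it requires adding `gunicorn` to `requirements.txt`), is a WSGI server:

```dockerfile
# Serve app.py's `app` object with gunicorn instead of the dev server
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```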
Create a requirements.txt file:
```text
flask
scikit-learn
```
Add other required libraries as needed (TensorFlow, PyTorch, pandas, etc.).
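For reproducible builds, it is worth pinning versions so the image doesn't silently change when dependencies release updates; the exact version numbers below are illustrative:

```text
flask==3.0.3
scikit-learn==1.4.2
```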
Build your Docker image with:
```bash
docker build -t my-ml-model .
```
Launch the containerized application:
```bash
docker run -p 5000:5000 my-ml-model
```
Your API will be accessible at http://localhost:5000/predict.
Test the deployed model using curl:
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"input": [5.1, 3.5, 1.4, 0.2]}' \
  http://localhost:5000/predict
```
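A successful request returns the predictions as JSON, in the shape produced by the handler above; the class value shown here is illustrative and depends on your model:

```json
{"predictions": [0]}
```

Malformed input instead returns an `{"error": ...}` body with a 400 status, courtesy of the `try`/`except` block in `app.py`.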
Share your container image:
```bash
docker tag my-ml-model your-dockerhub-username/my-ml-model
docker push your-dockerhub-username/my-ml-model
```
Reduce image size using multi-stage builds:
```dockerfile
# Stage 1: Build dependencies
FROM python:3.10-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt --target=/app/dependencies

# Stage 2: Create runtime environment
FROM python:3.10-slim
WORKDIR /app
COPY --from=builder /app/dependencies /app/dependencies
COPY . .
ENV PYTHONPATH=/app/dependencies
CMD ["python", "app.py"]
```
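Trimming the build context also helps keep images small, since `COPY . .` pulls in everything not excluded. A minimal `.dockerignore` sketch (the entries assume a typical Python project layout):

```text
.git
__pycache__/
*.pyc
.venv/
```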
Automate building and deploying Docker images with a CI/CD pipeline: on every push, the pipeline builds the image, runs any tests, and pushes the result to a registry such as Docker Hub.
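As one way to wire this up, here is a sketch of a GitHub Actions workflow; the secret names `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` are assumptions and must match secrets you configure in your repository:

```yaml
# .github/workflows/docker.yml
name: Build and push image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
      - name: Build image
        run: docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/my-ml-model:latest .
      - name: Push image
        run: docker push ${{ secrets.DOCKERHUB_USERNAME }}/my-ml-model:latest
```

Other CI systems (GitLab CI, Jenkins, CircleCI) follow the same build-then-push pattern with their own configuration syntax.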
Deploying machine learning models with Docker is more accessible and efficient than ever. This guide has covered:

- Serving predictions from a pickled model behind a Flask API
- Containerizing the service with a Dockerfile and building the image
- Running and testing the container locally
- Publishing the image to Docker Hub
- Shrinking images with multi-stage builds
- Automating builds and deployment with CI/CD
Docker's portability and scalability provide excellent tools for modern ML deployment challenges. Start with basic deployments and gradually incorporate advanced techniques like CI/CD and Kubernetes orchestration as your needs grow.
© 2024 ApX Machine Learning. All rights reserved.