Okay, you have a basic Flask application running, and somewhere within your application's memory, you've loaded your trained machine learning model using a library like `pickle` or `joblib`. That's great progress! But how does someone actually use the model? How can another program send data to your application and get a prediction back?
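As a reminder, that starting point might look like the sketch below. This is a minimal, illustrative setup: the use of `joblib` and the filename `model.pkl` are assumptions, not requirements.

```python
# Minimal sketch of the starting point described above.
# The joblib format and the 'model.pkl' filename are illustrative assumptions.
import joblib
from flask import Flask

app = Flask(__name__)

# Load the model once at startup so every request can reuse it
model = joblib.load('model.pkl')
```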
This is where the concept of an endpoint comes in. Think of your Flask application as a small web server. An endpoint is like a specific address or URL path on that server that is designated to handle a particular type of request. When an external client (like a web browser, a mobile app, or another backend service) sends a request to that specific URL path, Flask knows which part of your Python code should handle it.
### The `@app.route()` Decorator
Flask makes defining these endpoints remarkably straightforward using a feature called decorators. Specifically, we use the `@app.route()` decorator. A decorator in Python is a special kind of function that modifies or enhances another function. In Flask, `@app.route()` is placed directly above a standard Python function definition. Its job is to tell Flask: "Whenever a web request arrives for the URL path specified inside the parentheses, execute the function defined immediately below."
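If decorators are new to you, here is a tiny, Flask-free illustration of the idea. The names `announce` and `greet` are purely illustrative; the point is that a decorator wraps a function and can run extra code around each call.

```python
# A plain-Python decorator, unrelated to Flask, just to show the mechanism:
# 'announce' wraps another function and runs extra code around each call.
def announce(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}...")
        return func(*args, **kwargs)
    return wrapper

@announce
def greet(name):
    return f"Hello, {name}!"

print(greet("Ada"))  # Prints "Calling greet..." and then "Hello, Ada!"
```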
Let's make this concrete. We want to create an endpoint specifically for handling prediction requests. A common and descriptive URL path for this purpose is `/predict`. Here’s how you'd start defining it in your Flask application file (often named `app.py` or similar):
```python
# Assume 'app' is your Flask application instance, already created like:
# from flask import Flask
# app = Flask(__name__)
# Also assume 'model' is your loaded machine learning model

# Define the prediction endpoint
@app.route('/predict')
def handle_prediction():
    # This function will eventually contain the prediction logic
    # For now, it just returns a simple text message
    return "Prediction requests will be handled here."

# (Your app.run() call would be elsewhere, typically at the end of the file)
```
In this snippet:

- `@app.route('/predict')`: This is the decorator. It registers the URL path `/predict` with Flask.
- `def handle_prediction():`: This is the regular Python function that Flask will execute when a request hits `/predict`. We've given it a descriptive name, `handle_prediction`.
- `return "Prediction requests will be handled here."`: For now, this function simply returns a basic string. Later, we will modify this to perform actual predictions.
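If you start the application (for example with `app.run()`), you can check that the route responds. The sketch below uses the `requests` library and assumes the development server is running locally on the default port 5000.

```python
# Quick check of the new endpoint. Assumes the Flask development server
# is running locally on the default port 5000.
import requests

response = requests.get('http://127.0.0.1:5000/predict')
print(response.status_code)  # 200
print(response.text)         # Prediction requests will be handled here.
```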
Web communication relies on HTTP methods (also called verbs) to indicate the intended action for a request. The two most common methods you'll encounter initially are `GET` and `POST`.

- `GET`: Used primarily for retrieving data. When you type a URL into your browser's address bar, you're typically making a `GET` request. Data is often passed directly in the URL itself (as query parameters, like `?name=value`).
- `POST`: Used primarily for sending data to a server to create or update a resource, or to trigger a process that requires input data. The data is sent in the body of the request, not usually visible in the URL.

For making predictions with a machine learning model, we almost always need to send input data (the features the model expects). This input data can sometimes be complex or large (think of image data or long text). Sending this data in the URL (as `GET` might do) is impractical and often has length limitations. Therefore, the standard practice for prediction endpoints is to use the `POST` method. The client will send the input features in the request body, typically formatted as JSON (which we'll cover in the next section).
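As a preview, the request body might carry a small JSON document like the one sketched below. The key `features` and the values are purely illustrative placeholders; the real structure depends on what your model expects.

```python
import json

# Illustrative only: the key 'features' and these values are placeholders.
# The real structure depends on the inputs your model expects.
example_payload = {"features": [5.1, 3.5, 1.4, 0.2]}

# This JSON string is what a client would place in the POST request body.
print(json.dumps(example_payload))  # {"features": [5.1, 3.5, 1.4, 0.2]}
```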
How do we tell Flask that our `/predict` endpoint should only respond to `POST` requests? We modify the `@app.route()` decorator by adding a `methods` argument:
```python
from flask import Flask  # Make sure Flask is imported

# Assume app = Flask(__name__) and model is loaded

@app.route('/predict', methods=['POST'])  # Specify that this route accepts POST requests
def predict():  # Function name can be anything, 'predict' is common
    # --- Placeholder Logic ---
    # 1. We will add code here later to get input data from the POST request.
    # 2. We will add code here later to use the 'model' to make a prediction.
    # 3. We will add code here later to format and return the prediction.
    # --- End Placeholder Logic ---

    # For now, just confirm the endpoint is reached via POST
    return "Received a POST request on /predict"

# (Remember to include app.run() if this is your main script)
# if __name__ == '__main__':
#     # Load your model here first if not already loaded globally
#     # e.g., model = joblib.load('your_model.pkl')
#     app.run(debug=True)  # debug=True is helpful during development
```
Now, our Flask application knows that the function `predict()` should only be executed when a `POST` request arrives at the `/predict` URL path. If someone tries to access `/predict` using a `GET` request (e.g., by typing the URL directly into a browser), Flask will typically return a 405 Method Not Allowed error.
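You can confirm both behaviors with the `requests` library. As before, this assumes the development server is running locally on port 5000, and the JSON payload is purely illustrative.

```python
# Assumes the Flask development server is running locally on port 5000.
import requests

# A POST request reaches the predict() function
resp = requests.post('http://127.0.0.1:5000/predict',
                     json={"features": [5.1, 3.5, 1.4, 0.2]})
print(resp.status_code)  # 200
print(resp.text)         # Received a POST request on /predict

# A GET request to the same URL is rejected by Flask
resp = requests.get('http://127.0.0.1:5000/predict')
print(resp.status_code)  # 405 (Method Not Allowed)
```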
You have successfully defined the specific "door" (`/predict` via `POST`) through which prediction requests will enter your application. The next steps involve writing the code inside the `predict()` function to actually handle the incoming data, use the model, and send back a meaningful response.