Pydantic models work well for flat data structures, but machine learning inputs and outputs frequently require more intricate, nested ones. For example, you might need to send configuration parameters alongside input features, or return predictions along with confidence scores and metadata. Pydantic lets you define these complex structures by nesting models within other models.
This approach aligns perfectly with how JSON naturally represents hierarchical data, making it straightforward to define precisely what your API expects and returns.
Creating a nested model in Pydantic is intuitive. You simply use another Pydantic model as the type annotation for a field within your main model.
Let's consider an example where our ML model requires not just the primary input data but also some configuration settings. We can define separate models for the configuration and the overall request structure.
```python
from pydantic import BaseModel, Field
from typing import Optional

# Define a model for configuration settings
class ModelConfig(BaseModel):
    model_version: str = "latest"
    confidence_threshold: float = Field(default=0.7, ge=0.0, le=1.0)
    return_probabilities: bool = False

# Define the main input data model
class InputFeatures(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

# Define the overall request model, nesting ModelConfig and InputFeatures
class PredictionRequest(BaseModel):
    request_id: str
    features: InputFeatures  # Nesting the InputFeatures model
    config: Optional[ModelConfig] = None  # Nesting ModelConfig, making it optional
```
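Before wiring these models into an endpoint, it can help to see the nested validation on its own. The following sketch constructs a `PredictionRequest` directly from a dictionary (the models are redefined here so the snippet runs standalone; the `request_id` values and feature numbers are illustrative):

```python
from pydantic import BaseModel, Field, ValidationError
from typing import Optional

class ModelConfig(BaseModel):
    model_version: str = "latest"
    confidence_threshold: float = Field(default=0.7, ge=0.0, le=1.0)
    return_probabilities: bool = False

class InputFeatures(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

class PredictionRequest(BaseModel):
    request_id: str
    features: InputFeatures
    config: Optional[ModelConfig] = None

# A valid payload: the nested "features" dict is coerced into an
# InputFeatures instance; "config" is omitted, so it defaults to None
payload = {
    "request_id": "req-001",
    "features": {
        "sepal_length": 5.1,
        "sepal_width": 3.5,
        "petal_length": 1.4,
        "petal_width": 0.2,
    },
}
request = PredictionRequest(**payload)
print(request.config)  # None

# An invalid payload: a non-numeric sepal_length is rejected,
# and the error pinpoints the offending nested field
try:
    PredictionRequest(
        request_id="req-002",
        features={"sepal_length": "oops", "sepal_width": 3.5,
                  "petal_length": 1.4, "petal_width": 0.2},
    )
    rejected = False
except ValidationError:
    rejected = True
print(rejected)  # True
```

The same validation logic runs when FastAPI parses an incoming request body, which is why the endpoint code below never sees malformed data.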
In this `PredictionRequest` model:

- The `features` field is explicitly typed as `InputFeatures`. Pydantic expects the data for this field to conform to the `InputFeatures` schema.
- The `config` field is typed as `Optional[ModelConfig]`. This means it expects data conforming to the `ModelConfig` schema, but it's also acceptable if this field is not provided in the request (it will default to `None`). If it is provided, it must be a valid `ModelConfig` structure.

FastAPI integrates seamlessly with these nested Pydantic models. When you use `PredictionRequest` as a type hint for a request body parameter in your path operation function, FastAPI, powered by Pydantic, will automatically:

- Validate the incoming JSON body against `PredictionRequest`, including the nested `InputFeatures` and `ModelConfig` structures if provided. It verifies data types (e.g., `float` for the measurements, `str` for `request_id`) and constraints (e.g., `confidence_threshold` between 0.0 and 1.0).
- Pass your function an instance of the `PredictionRequest` class, populated with the validated data.

Here's how you might use `PredictionRequest` in an endpoint:
```python
from fastapi import FastAPI

# Assume PredictionRequest, InputFeatures, ModelConfig are defined as above
app = FastAPI()

@app.post("/predict")
async def create_prediction(request: PredictionRequest):
    # Access nested data easily
    features_data = request.features
    config_data = request.config if request.config else ModelConfig()  # Use defaults if not provided

    print(f"Received request: {request.request_id}")
    print(f"Features: {features_data.dict()}")
    print(f"Config: Version={config_data.model_version}, Threshold={config_data.confidence_threshold}")

    # (Model inference logic would go here)
    # ...
    prediction = {"class": "setosa", "probability": 0.95}  # Example output
    return {"request_id": request.request_id, "prediction": prediction}
```
If a client sends a request with an invalid structure, like providing a string for sepal_length or omitting a required field like request_id, FastAPI will automatically return a 422 Unprocessable Entity error response detailing the validation issues, without your endpoint code even running.
Just as you structure complex inputs, you often need to structure complex outputs. For example, returning not just a prediction label but also associated probabilities or bounding boxes. You can use nested Pydantic models with the response_model parameter in your path operation decorator.
```python
from pydantic import BaseModel
from typing import List, Optional

class PredictionResult(BaseModel):
    predicted_class: str
    probability: Optional[float] = None

class PredictionResponse(BaseModel):
    request_id: str
    results: List[PredictionResult]  # List containing nested PredictionResult models
    model_version_used: str

# Assume app, PredictionRequest, and ModelConfig are defined as above
@app.post("/predict_detailed", response_model=PredictionResponse)
async def create_detailed_prediction(request: PredictionRequest):
    # (Model inference logic)
    # Assume the model predicts multiple results or probabilities
    model_output = [
        {"predicted_class": "setosa", "probability": 0.98},
        {"predicted_class": "versicolor", "probability": 0.02},
    ]
    config_data = request.config if request.config else ModelConfig()

    # Construct the response conforming to PredictionResponse
    response_data = PredictionResponse(
        request_id=request.request_id,
        results=[PredictionResult(**item) for item in model_output],
        model_version_used=config_data.model_version,
    )
    return response_data
```
By setting `response_model=PredictionResponse`, FastAPI ensures:

- The returned data is validated against the `PredictionResponse` schema (including the nested `PredictionResult` list).
- Only the fields defined in `PredictionResponse` are included in the final HTTP response, preventing accidental leakage of internal data.

The following diagram illustrates the composition of the `PredictionRequest` model defined earlier.
The `PredictionRequest` model contains an instance of `InputFeatures` and, optionally, an instance of `ModelConfig`.
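The filtering behavior of `response_model` can also be seen directly at the Pydantic level: extra keys in the raw output are simply not part of the schema, so they never reach the serialized result. In this sketch (with the response models redefined so it runs standalone), the hypothetical `internal_latency_ms` key is an assumption added for illustration:

```python
from pydantic import BaseModel
from typing import List, Optional

class PredictionResult(BaseModel):
    predicted_class: str
    probability: Optional[float] = None

class PredictionResponse(BaseModel):
    request_id: str
    results: List[PredictionResult]
    model_version_used: str

# Raw model output with an extra key that is NOT in the schema
raw = {
    "request_id": "req-001",
    "results": [{"predicted_class": "setosa", "probability": 0.98}],
    "model_version_used": "latest",
    "internal_latency_ms": 12.5,  # hypothetical internal field
}

# Pydantic ignores the unknown key by default, so serializing through
# the model drops it from the output
response = PredictionResponse(**raw)
data = response.dict()
print("internal_latency_ms" in data)  # False
```

This is the same mechanism FastAPI applies when it serializes your return value through the `response_model`, which is why internal fields cannot leak accidentally.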
Structuring your data models using nesting is a powerful way to handle the complexity inherent in many machine learning tasks, ensuring data integrity and clarity in your API definitions. This declarative approach using Pydantic significantly simplifies validation logic within your FastAPI application.