Connecting a language model to a structured database transforms it from a text generator into a data analyst. While Resources allow you to feed raw text or logs to a model, they are often insufficient for large datasets where context window limits apply. A Tool, however, allows the Large Language Model (LLM) to request specific slices of data using Structured Query Language (SQL).
In this section, we will implement an MCP server that exposes a SQLite database to an LLM. This requires creating a secure environment where the model can construct queries, the server executes them, and the results are returned as a JSON string.
Before writing code, it is helpful to visualize the data flow. Unlike a static Resource read, a Tool invocation involves a bidirectional exchange where the client supplies arguments (the SQL query) and the server returns the execution result.
The diagram illustrates the request lifecycle. The server acts as an intermediary, validating the tool arguments before the logic layer interacts with the physical database file.
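To make that exchange concrete, here is a rough sketch of the two JSON-RPC messages involved, written as Python dictionaries for readability. The tool name query_products and the example query refer to the server we build later in this section; the field names follow the MCP tools/call method.

# What the client (driven by the LLM) sends: a tools/call request
# carrying the SQL query as a tool argument.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "query_products",
        "arguments": {
            "sql_query": "SELECT name, stock_level FROM products WHERE stock_level < 50"
        },
    },
}

# The server runs the query and hands the rows back as text content.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [
            {
                "type": "text",
                "text": '[{"name": "Laptop Pro", "stock_level": 45}, {"name": "Ergonomic Chair", "stock_level": 20}]',
            }
        ],
        "isError": False,
    },
}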
For this exercise, we will use the sqlite3 module, which is included in the Python standard library, together with the mcp SDK. Specifically, we will use the FastMCP class, which simplifies initialization of the JSON-RPC connection and capability negotiation.
First, create a dedicated directory for this project and initialize a virtual environment.
mkdir mcp-db-tool
cd mcp-db-tool
python -m venv venv
source venv/bin/activate # On Windows use: venv\Scripts\activate
pip install mcp
To test our tool, we need data. Create a file named setup_db.py to initialize a dummy e-commerce inventory. This script will create a table and insert sample records.
import sqlite3

def create_dummy_data():
    conn = sqlite3.connect("inventory.db")
    cursor = conn.cursor()

    # Create table
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS products (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            category TEXT,
            price REAL,
            stock_level INTEGER
        )
    """)

    # Insert data
    products = [
        ("Laptop Pro", "Electronics", 1299.99, 45),
        ("Wireless Mouse", "Electronics", 29.99, 150),
        ("Ergonomic Chair", "Furniture", 350.00, 20),
        ("Noise Cancelling Headphones", "Audio", 199.50, 60),
        ("USB-C Cable", "Accessories", 12.99, 300)
    ]
    cursor.executemany(
        "INSERT INTO products (name, category, price, stock_level) VALUES (?, ?, ?, ?)",
        products
    )

    conn.commit()
    conn.close()
    print("Database 'inventory.db' created with sample data.")

if __name__ == "__main__":
    create_dummy_data()
Run this script once to generate the inventory.db file in your directory.
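If you want to confirm the data landed correctly, a quick optional check such as the one below should report five rows:

import sqlite3

# Optional sanity check: count the rows that setup_db.py inserted.
conn = sqlite3.connect("inventory.db")
count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
conn.close()
print(f"products table contains {count} rows")  # expected: 5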
Now we create the actual MCP server. Create a file named server.py. This file will define the tool interface.
In the code below, pay close attention to the docstring of the query_products function. The MCP SDK uses this docstring to generate the description field in the tool definition that is sent to the LLM. The more descriptive this text is, the better the model will be at formulating valid queries.
from mcp.server.fastmcp import FastMCP
import sqlite3
import json

# Initialize the server
mcp = FastMCP("Inventory Explorer")

@mcp.tool()
def query_products(sql_query: str) -> str:
    """
    Execute a SQL query against the products database.
    The database has a table named 'products' with columns:
    id (INTEGER), name (TEXT), category (TEXT), price (REAL), stock_level (INTEGER).
    Only SELECT queries are permitted.
    """
    # Basic Security Check: Read-Only Enforcement
    if not sql_query.strip().upper().startswith("SELECT"):
        raise ValueError("Only SELECT queries are allowed for safety.")

    try:
        # Connect to database in read-only mode if supported by URI,
        # otherwise rely on the logic check above.
        conn = sqlite3.connect("inventory.db")
        cursor = conn.cursor()
        cursor.execute(sql_query)

        # Fetch all results
        rows = cursor.fetchall()

        # Get column names to make the result self-describing
        column_names = [description[0] for description in cursor.description]

        # Combine columns and rows into a list of dictionaries
        results = [dict(zip(column_names, row)) for row in rows]

        conn.close()

        # Return results as a JSON string
        return json.dumps(results, indent=2)

    except sqlite3.Error as e:
        return f"Database Error: {str(e)}"
    except Exception as e:
        return f"Execution Error: {str(e)}"

if __name__ == "__main__":
    mcp.run()
There are several implementation details worth noting in the code above.
Type Hinting for Validation
The argument sql_query: str is not just for readability. The MCP SDK uses Python type hints to construct the JSON Schema for the tool. When the client (LLM) calls this tool, it knows it must provide a string. If we had used a Pydantic model for more complex arguments (like filtering by price range specifically), the SDK would enforce those constraints before the function ever runs.
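As a sketch of that idea, the hypothetical tool below accepts a Pydantic model instead of raw SQL. It is not part of server.py above, but it could be added next to query_products and registered on the same mcp instance. The constraints declared with Field become part of the generated schema, so out-of-range arguments are rejected before the function runs, assuming your SDK version supports Pydantic model parameters.

import json
import sqlite3

from pydantic import BaseModel, Field

class PriceRange(BaseModel):
    """Constrained arguments for a price-range lookup."""
    min_price: float = Field(0, ge=0, description="Lowest price to include")
    max_price: float = Field(..., gt=0, description="Highest price to include")

# Register on the same `mcp` instance created in server.py.
@mcp.tool()
def products_in_price_range(price_range: PriceRange) -> str:
    """Return products whose price falls within the given range."""
    conn = sqlite3.connect("inventory.db")
    rows = conn.execute(
        "SELECT name, price FROM products WHERE price BETWEEN ? AND ?",
        (price_range.min_price, price_range.max_price),
    ).fetchall()
    conn.close()
    return json.dumps([{"name": n, "price": p} for n, p in rows], indent=2)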
Error Handling as Context
Notice that we catch exceptions and return them as strings rather than letting the server crash. In an MCP architecture, an error message is valid context. If the LLM writes a broken query that slips past the SELECT check (e.g., SELECT * FROM prodcts), returning "Database Error: no such table: prodcts" lets the LLM see its mistake and self-correct in the next turn of the conversation.
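Calling the function directly makes that loop visible; the exact wording of the error comes from SQLite, so it may vary slightly between versions:

# Simulating a bad call the model might make. The returned string, not an
# exception, is what flows back into the model's context on the next turn.
print(query_products("SELECT * FROM prodcts"))
# Database Error: no such table: prodcts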
Security Considerations
Giving an LLM the ability to execute code or SQL is inherently risky. In our example, we added a basic startswith("SELECT") check. In a production environment, this is insufficient. You should prefer either a read-only database connection (or a database user limited to SELECT permissions), or parameterized tools such as get_products_by_category(category: str), where the SQL is hardcoded in your Python function and the LLM only supplies parameters; both patterns are sketched after the schema example below.
When a client connects and requests the tool list (tools/list), the server responds with the tool definitions. Based on our code, the generated schema will look similar to the JSON structure below. This is what the LLM "sees" to understand how to interact with your database.
{
  "name": "query_products",
  "description": "Execute a SQL query against the products database. The database has a table named 'products'...",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sql_query": {
        "type": "string"
      }
    },
    "required": ["sql_query"]
  }
}
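As promised in the security considerations, here is a sketch of the two hardening patterns: a read-only SQLite connection using the URI syntax, and a parameterized tool (the name get_products_by_category is illustrative) that hardcodes the SQL and accepts only a category value. Both would live in server.py next to query_products.

import json
import sqlite3

# Pattern 1: open the database read-only, so even a query that slips past
# the SELECT check cannot modify data.
def connect_read_only():
    return sqlite3.connect("file:inventory.db?mode=ro", uri=True)

# Pattern 2: hardcode the SQL and let the LLM supply only a parameter.
# Register on the same `mcp` instance created in server.py.
@mcp.tool()
def get_products_by_category(category: str) -> str:
    """Return all products in the given category (e.g. 'Electronics')."""
    conn = connect_read_only()
    rows = conn.execute(
        "SELECT name, price, stock_level FROM products WHERE category = ?",
        (category,),
    ).fetchall()
    conn.close()
    return json.dumps(
        [{"name": n, "price": p, "stock_level": s} for n, p, s in rows],
        indent=2,
    )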
To test this implementation, you can use the MCP Inspector or integrate it directly with a client. If you are using the Claude Desktop App, you would add the following to your configuration file to register the server:
"mcpServers": {
"inventory": {
"command": "uv",
"args": [
"run",
"server.py"
]
}
}
Once connected, you can ask the model natural language questions about the inventory, such as which products are running low on stock or what the average price is in each category. The model will translate these intents into SQL, invoke query_products, and use the JSON response to answer your question.
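If you prefer to exercise the tool without a desktop client, the SDK also ships a stdio client you can script against. The snippet below is a sketch based on the mcp package's client API; adjust the imports if your SDK version organizes them differently.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch server.py as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "query_products",
                {"sql_query": "SELECT name, stock_level FROM products WHERE stock_level < 50"},
            )
            print(result.content[0].text)

asyncio.run(main())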
You might wonder why we did not use a Resource for this. A Resource is best for static data, like reading a specific log file (file://logs/error.txt). If we used a Resource for the database, we would have to dump the entire database content into the context window every time the model needed information.
By using a Tool, we allow the model to perform retrieval. The model decides exactly what subset of data it needs. This reduces token usage and latency, and allows for operations that are impossible with static resources, such as aggregation (counting, averaging) performed by the database engine rather than the LLM.
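For example, a question about average prices becomes a single aggregated call; the grouping and averaging happen inside SQLite, and only a few summary rows ever enter the context window:

# The model would send something like this as the tool argument;
# SQLite does the aggregation and returns one row per category.
print(query_products(
    "SELECT category, COUNT(*) AS items, AVG(price) AS avg_price "
    "FROM products GROUP BY category"
))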