The implementation of a resource read operation is the mechanism that transforms a static Uniform Resource Identifier (URI) into accessible content for the Large Language Model (LLM). While the resource list provides a catalog of available data points, the read handler performs the actual retrieval. In a synchronous context, this process involves receiving a request, locating the underlying data, formatting it correctly, and returning it immediately to the client.
When an LLM selects a resource to inspect, the client sends a JSON-RPC request with the method resources/read. This request contains the specific URI the model wishes to access. The server must route this URI to the correct internal handler function.
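On the wire, such a request is an ordinary JSON-RPC 2.0 message. The shape below follows the MCP specification; the URI and request id are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": {
    "uri": "logs://system/error"
  }
}
```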
The lifecycle of a synchronous read consists of three distinct phases: routing the requested URI to the correct handler, retrieving the underlying data, and formatting the result into a valid response object.
The following diagram outlines the flow of data when a client initiates a read request.
Data flow for a synchronous resource read request illustrating the routing and retrieval steps.
In the Python SDK, handling resource reads involves decorating a function that accepts a URI as an argument. The SDK manages the underlying JSON-RPC communication, allowing you to focus on the retrieval logic.
The handler must return the content in a specific format. The protocol defines a ReadResourceResult which contains a list of content items. Most commonly, you will return TextContent, which includes the uri, the text body, and a mimeType.
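Concretely, the result returned to the client looks roughly like the following JSON (field values are illustrative; the exact envelope is defined by the MCP specification):

```json
{
  "contents": [
    {
      "uri": "logs://system/error",
      "mimeType": "text/plain",
      "text": "Critical failure in module X"
    }
  ]
}
```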
Consider a server designed to expose system logs. The URI scheme might follow the pattern logs://system/{log_level}. The implementation requires defining a function that parses this URI and filters the log data accordingly.
```python
from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("LogServer")

# Mock data store
LOG_DATA = {
    "error": "Critical failure in module X\nDatabase connection lost",
    "info": "Service started on port 8080\nHealth check passed",
    "debug": "Variable state dump: {x: 1, y: 2}"
}

@mcp.resource("logs://system/{level}")
def read_log(level: str) -> str:
    """
    Reads the system log for a specific severity level.
    """
    # Access the requested data synchronously
    content = LOG_DATA.get(level)
    if content is None:
        # Returning an error message in the content is often safer
        # than raising an exception for simple lookups
        return f"No logs found for level: {level}"
    return content
```
In this example, the pattern logs://system/{level} automatically extracts the level variable from the incoming URI. If a client requests logs://system/error, the SDK invokes read_log("error") and wraps the returned string in a valid resource response object automatically.
While text is the most common format for LLM context, resources often need to represent structured data or binary content.
For structured data, it is best practice to serialize the object to JSON before returning it. This ensures the LLM receives a syntactically correct string that it can parse easily. You should set the MIME type to application/json to provide a hint to the model regarding the content structure.
```python
import json

# Declaring the MIME type tells the client the content is JSON
@mcp.resource("users://{user_id}/profile", mime_type="application/json")
def get_user_profile(user_id: str) -> str:
    # Simulating a database lookup
    user_data = {
        "id": user_id,
        "role": "admin",
        "last_login": "2023-10-27T10:00:00Z"
    }
    # Return serialized JSON
    return json.dumps(user_data, indent=2)
```
Providing indentation in the JSON output increases the token count slightly but significantly improves readability for the model, aiding in more accurate processing of the data structure.
Synchronous reads often require handling URIs that were not explicitly listed in the resource catalog. While you might list file://project/readme.md explicitly, you may want to support reading any file in a directory using a wildcard or pattern.
When implementing the handler, you must ensure that the dynamic segment of the URI leads to valid data. This introduces a security consideration: path traversal. When a resource handler accepts a dynamic argument that acts as a file path or database identifier, validation is necessary to prevent unauthorized access.
The following diagram illustrates the decision logic required when processing dynamic URIs.
Logic flow for validating and processing dynamic resource parameters.
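One robust way to implement this validation is to resolve the client-supplied segment against a fixed base directory and reject anything that escapes it. The sketch below uses only the standard library; SAFE_ROOT is a hypothetical directory chosen for illustration:

```python
from pathlib import Path

# Hypothetical base directory the server is allowed to expose
SAFE_ROOT = Path("/srv/mcp-data").resolve()

def resolve_safe_path(requested: str) -> Path:
    """Resolve a client-supplied relative path, rejecting traversal."""
    candidate = (SAFE_ROOT / requested).resolve()
    # resolve() collapses ".." segments, so a traversal attempt
    # ends up outside SAFE_ROOT and fails this containment check
    if not candidate.is_relative_to(SAFE_ROOT):
        raise ValueError(f"Access denied: {requested}")
    return candidate
```

Checking the resolved path, rather than scanning the raw string for "..", also catches traversal attempts hidden behind redundant separators or mixed segments.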
When a resource cannot be read, the server must communicate this failure clearly. In the MCP architecture, you have two primary options for handling errors during a read operation: raising an exception, which surfaces to the client as a JSON-RPC error response, or returning a human-readable error description as the resource content itself.
For LLM interactions, the second approach is often superior. If a model requests a file that does not exist, receiving a JSON-RPC error might cause the tool use chain to break or the model to hallucinate a reason for the failure. Returning a text result such as "Error: File not found at path X" allows the model to read the error as context and potentially correct its own mistake by requesting a different path.
```python
@mcp.resource("file://{path}")
def read_file(path: str) -> str:
    try:
        # Validate that the path is relative to a safe directory
        if ".." in path or path.startswith("/"):
            raise ValueError("Access denied")
        with open(path, "r") as f:
            return f.read()
    except FileNotFoundError:
        return f"Error: The file '{path}' does not exist."
    except ValueError as e:
        return f"Error: {str(e)}"
    except Exception:
        return "Error: An internal error occurred while reading the file."
```
Synchronous reads block the request processing thread. While the server is reading a file or querying a database, it cannot process other messages on that specific connection if the implementation is single-threaded or blocking.
For local file reads, the latency is negligible. However, if your resource fetches data from a slow external API, the delay adds directly to the time the user waits for a response.
If the retrieval time is expected to be significant (e.g., longer than 500ms), you should consider caching the result in memory. Since resources are requested via specific URIs, these URIs make excellent cache keys.
$$T_{\text{response}} = T_{\text{network}} + T_{\text{processing}} + T_{\text{lookup}}$$
Minimizing $T_{\text{lookup}}$ through caching ensures that the context assembly phase of the LLM interaction remains fluid. Simple Python dictionaries or LRU (Least Recently Used) cache decorators are effective strategies for resources that do not change frequently.
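Because each resource URI uniquely identifies its content, the URI string can serve directly as the cache key. The sketch below wraps a hypothetical slow lookup with the standard-library functools.lru_cache decorator:

```python
from functools import lru_cache
import time

def slow_fetch(uri: str) -> str:
    # Hypothetical slow retrieval, e.g. a call to an external API
    time.sleep(0.1)  # simulate network latency
    return f"payload for {uri}"

@lru_cache(maxsize=128)
def cached_fetch(uri: str) -> str:
    # The URI string acts as the cache key; repeated reads of the
    # same resource skip the slow lookup entirely
    return slow_fetch(uri)
```

If the underlying data can change, call cached_fetch.cache_clear() when it does, or switch to a time-based invalidation scheme; lru_cache alone never expires entries.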