Developing a Model Context Protocol server often presents an architectural dilemma: when you need to expose dynamic data, should you wrap it in a Tool or define it as a Resource? In the previous sections, we established that Tools are executable functions and Resources are read-only data. However, the line blurs when data must be calculated on the fly or retrieved from a database based on specific parameters.

A computed resource is a resource that does not map to a static file on disk. Instead, its content is generated programmatically at the moment of the request. This allows you to expose dynamic system state, such as current memory usage, the latest row in a database table, or a live API response, using the same standardized URI interface used for static text files.

## The Architectural Distinction

The decision between implementing a Tool and a Computed Resource fundamentally changes how the Large Language Model (LLM) interacts with your data. This choice depends on whether the interaction is best modeled as a function call (RPC) or as a data lookup (REST-like GET).

When you define a Tool, you provide the model with agency. The model explicitly decides to invoke the tool, selecting arguments based on the conversation context. This is an active process suitable for actions that might fail, require complex parameters, or change the state of the system.

When you define a Computed Resource, you provide the model with context. The model sees a URI (Uniform Resource Identifier) and treats the content as a piece of reference material. This is a passive process suitable for data that identifies a specific entity and can be read safely without side effects.

The following decision tree outlines the logic for choosing between these primitives.

```dot
digraph G {
    rankdir=TB;
    node [shape=rect, style=filled, fontname="Arial", fontsize=12, margin=0.2];
    edge [fontname="Arial", fontsize=10, color="#adb5bd"];

    start [label="Data Access Requirement", fillcolor="#e9ecef", color="#dee2e6"];
    side_effect [label="Does it modify system state\n(Create, Update, Delete)?", fillcolor="#a5d8ff", color="#74c0fc"];
    complex_args [label="Does it require multiple\ncomplex arguments?", fillcolor="#a5d8ff", color="#74c0fc"];
    identity [label="Is the data identified\nby a unique ID or Path?", fillcolor="#a5d8ff", color="#74c0fc"];
    tool_node [label="Implement as Tool", fillcolor="#ffc9c9", color="#ff8787", shape=note];
    resource_node [label="Implement as\nComputed Resource", fillcolor="#b2f2bb", color="#69db7c", shape=note];

    start -> side_effect;
    side_effect -> tool_node [label="Yes"];
    side_effect -> complex_args [label="No (Read Only)"];
    complex_args -> tool_node [label="Yes (Structured JSON)"];
    complex_args -> identity [label="No (Single ID/Path)"];
    identity -> resource_node [label="Yes (Addressable)"];
    identity -> tool_node [label="No (Search/Query)"];
}
```

*Logic flow for distinguishing between Tools and Resources based on state modification, argument complexity, and data identity.*

## Implementing Dynamic URIs

To implement a computed resource, you must rely on URI templates. Unlike static resources, where every file has a fixed path, computed resources use patterns to route requests to a handler function.

In the Python SDK, for example, you define a resource template using placeholder variables in the URI. If you are building a system to monitor server metrics, you might define a URI scheme like `system://metrics/{host}`, as sketched below.
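As a rough sketch of this pattern, the handler below registers a `system://metrics/{host}` template with the FastMCP resource decorator. The server name, the URI scheme, and the use of the third-party `psutil` package for metric gathering are illustrative assumptions, and this sketch can only report on the machine it runs on.

```python
import psutil  # assumed dependency for reading CPU and memory statistics
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("MetricsServer")

@mcp.resource("system://metrics/{host}")
def get_metrics(host: str) -> str:
    """Compute a metrics snapshot for the requested host at read time."""
    # Illustrative limitation: this process can only inspect the local machine.
    if host not in ("localhost", "127.0.0.1"):
        raise ValueError(f"Metrics for host '{host}' are not available from this server.")

    cpu = psutil.cpu_percent(interval=0.1)    # percent CPU over a short sample
    memory = psutil.virtual_memory().percent  # percent of RAM currently in use
    return f"Host: {host}\nCPU: {cpu}%\nMemory: {memory}%"
```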
When the client requests `system://metrics/localhost`, your handler parses the `{host}` variable and executes the logic to fetch CPU and memory stats for that specific host.

Consider a scenario where we want to expose user profiles from a database. We could create a tool named `get_user_profile(user_id)`, but since this is a read-only operation returning a standard document, a resource pattern is often more appropriate.

The implementation involves two distinct phases: listing the pattern and handling the read request.

### Phase 1: Exposing the Capability

First, the server must inform the client that it can handle these dynamic requests. This happens during the resource listing exchange: rather than enumerating every possible user in `resources/list`, the server advertises a URI template via `resources/templates/list`.

### Phase 2: Handling the Request

When the server receives a `resources/read` request, it matches the incoming URI against your registered templates. If a match is found, it extracts the variables and passes them to your logic handler.

$$\text{URI}_{\text{request}} = \text{scheme} + \text{://} + \text{path} + \text{variables}$$

Here is an example of how a computed resource is structured in Python to fetch user data dynamically:

```python
from mcp.server.fastmcp import FastMCP
import sqlite3

mcp = FastMCP("UserDirectory")

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """
    Dynamically fetches a user profile from the database.
    """
    # Connect to the database (in production, use a connection pool)
    conn = sqlite3.connect("users.db")
    cursor = conn.cursor()

    # Secure parameter substitution prevents SQL injection
    cursor.execute("SELECT name, email, role FROM users WHERE id = ?", (user_id,))
    row = cursor.fetchone()
    conn.close()

    if row:
        return f"Name: {row[0]}\nEmail: {row[1]}\nRole: {row[2]}"
    else:
        raise ValueError(f"User {user_id} not found.")
```

In this example, the `@mcp.resource` decorator acts as the router. The string `users://{user_id}/profile` tells the MCP server to invoke this function whenever a client requests a URI matching that pattern.

## Context Window and Subscriptions

A significant advantage of computed resources over tools is their integration with the resource subscription model. If the data changes frequently, a Resource allows the client to subscribe to updates.

If you use a Tool to fetch a stock price, the LLM receives a snapshot of the price at that moment. If the price changes five seconds later, the model has no way of knowing unless it decides to call the tool again.

If you use a Computed Resource (e.g., `stock://AAPL/price`), the client can subscribe to this URI. When your internal logic detects a price change, you can send a notification to the client. The client can then automatically fetch the new content and update the context window without the LLM needing to take any action. This creates a responsive data loop that is difficult to achieve with tools alone.

## Complexity and Validation

While resources are powerful, they lack the input validation schema that Tools possess. Tools use JSON Schema (often generated via Pydantic) to enforce strict types on arguments, as illustrated below.
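As a minimal sketch of that schema generation, the tool below is hypothetical (its name and parameters are not part of the profile example above); the point is that the type hints on a FastMCP tool are converted into a JSON Schema, so malformed arguments can be rejected before your handler runs.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("UserDirectory")

# Hypothetical tool: the type hints below are turned into a JSON Schema
# (via Pydantic), so the client sees exactly which arguments are expected.
@mcp.tool()
def find_users(tier: str, min_logins: int = 0, include_inactive: bool = False) -> str:
    """Search users by subscription tier, login count, and active status."""
    # Body elided; the point is the typed, validated signature.
    return f"Searching tier={tier}, min_logins={min_logins}, include_inactive={include_inactive}"
```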
Resources, by contrast, rely entirely on string parsing of the URI. If your data retrieval requires complex filtering, such as "Find all users who signed up last week and have a premium subscription", packing those parameters into a URI string becomes unwieldy and non-standard.

$$\text{URI} = \text{users://search?after=2023-10-01\&tier=premium}$$

While the URI above is valid, parsing query strings manually in a resource handler is error-prone compared to the structured validation provided by Tool definitions. Therefore, if the data access requires more than one or two simple identifier arguments, or if the arguments are optional and combinatorial, a Tool is the superior architectural choice.

## Hybrid Approaches

Advanced MCP implementations often use a hybrid approach. You can provide a Tool for search and discovery, which returns a list of Resource URIs (see the sketch at the end of this section).

**The Tool:** `search_users(query="engineering")` returns a list of simplified objects:

```json
[
  {"name": "Alice", "uri": "users://101/profile"},
  {"name": "Bob", "uri": "users://102/profile"}
]
```

**The Resource:** The LLM reads the output of the tool, sees the URIs, and can then request the full content of `users://101/profile` if it determines that specific detail is relevant.

This separation of concerns keeps your tools focused on "finding" and your resources focused on "reading," optimizing both context window usage and the logical flow of the application.
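To make the hybrid pattern concrete, here is a rough sketch that pairs a discovery tool with the profile resource from earlier. The in-memory `USERS` dictionary is purely illustrative and stands in for the database used in the previous example.

```python
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("UserDirectory")

# Illustrative stand-in for a real database.
USERS = {
    "101": {"name": "Alice", "department": "engineering", "email": "alice@example.com"},
    "102": {"name": "Bob", "department": "engineering", "email": "bob@example.com"},
}

@mcp.tool()
def search_users(query: str) -> str:
    """Find users matching a query and return their resource URIs."""
    matches = [
        {"name": user["name"], "uri": f"users://{user_id}/profile"}
        for user_id, user in USERS.items()
        if query.lower() in user["department"]
    ]
    return json.dumps(matches)

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Read the full profile behind a URI returned by search_users."""
    user = USERS.get(user_id)
    if user is None:
        raise ValueError(f"User {user_id} not found.")
    return f"Name: {user['name']}\nDepartment: {user['department']}\nEmail: {user['email']}"
```

Calling `search_users(query="engineering")` returns the compact URI list shown above, and the client can then issue a `resources/read` for whichever profile turns out to be relevant.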