Understanding the architecture of the Model Context Protocol (MCP) requires distinguishing between the logical roles components play and their physical deployment. Unlike a traditional client-server REST API where a single client often speaks to a monolithic server, MCP utilizes a star topology. A single controlling application manages connections to multiple, independent context providers.
This design decouples the intelligence of the system (the Large Language Model) from the specific implementation details of the data sources. We categorize the system components into three distinct roles: the Host, the Client, and the Server.
In an MCP ecosystem, responsibility is partitioned across these roles to ensure modularity and security.
The Server is the foundational unit of context. It is a standalone process or web service that exposes three specific primitives: Resources, Prompts, and Tools. An MCP Server does not contain its own LLM, nor does it maintain conversation history. Its sole purpose is to respond to standardized JSON-RPC requests.
For example, a "PostgreSQL Server" knows how to execute SQL queries against a database, but it does not know why the query is being executed or which user asked for it. It operates strictly on the inputs provided by the protocol connection.
The Client is the protocol implementation responsible for maintaining a 1:1 connection with a Server. It handles the handshake, capability negotiation, and message transport. In most implementations, the Client is a library (like the official TypeScript or Python SDKs) integrated into a larger application.
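To make these duties concrete, the following sketch shows the approximate shape of a tools/list exchange: the request a Client writes to the transport and the response it parses on the Host's behalf. The `query_database` tool in the response is a placeholder.

```python
import json

# What the Client sends over its connection when the Host wants to know which
# tools a Server offers.
request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Approximate shape of the Server's reply; "query_database" is a placeholder.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "query_database",
                "description": "Execute a read-only SQL query.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

print(json.dumps(request))
print(response["result"]["tools"][0]["name"])
```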
The Host is the application that users interact with, such as an IDE (VS Code), a chat interface (Claude Desktop), or an AI agent runtime. The Host creates the environment where the LLM operates. Crucially, the Host manages the lifecycle of the Client-Server connections.
It is common to conflate the Client and the Host because they run within the same process. However, the distinction is important: the Host owns application-level concerns such as the user interface, user consent, and orchestration of the LLM, while each Client owns protocol-level mechanics such as sending a tools/list message or parsing the JSON-RPC response.
The standard topology involves a single Host Application instantiating multiple MCP Clients. Each Client connects to a distinct MCP Server. This creates a 1:N relationship where one user interface aggregates capabilities from many isolated data sources.
The Host Application aggregates connections. Each Client manages a dedicated pipe to a specific Server, isolating the data contexts.
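A sketch of this fan-out with the official Python SDK might look like the following. The git server command matches the example used elsewhere in this section; `my-database-server` is a hypothetical second server included only to show the 1:N shape.

```python
# A sketch of a Host that opens one Client connection per Server.
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVERS = {
    "git": StdioServerParameters(command="uvx", args=["mcp-server-git"]),
    "db": StdioServerParameters(command="my-database-server", args=[]),  # hypothetical
}

async def main() -> None:
    sessions: dict[str, ClientSession] = {}
    async with AsyncExitStack() as stack:
        for name, params in SERVERS.items():
            # Each Client owns exactly one pipe to one Server.
            read, write = await stack.enter_async_context(stdio_client(params))
            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()
            sessions[name] = session

        # The Host aggregates capabilities from every connection.
        for name, session in sessions.items():
            tools = await session.list_tools()
            print(name, [tool.name for tool in tools.tools])

asyncio.run(main())
```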
The most common configuration for MCP is the local integration model. In this scenario, the Host Application spawns the MCP Server as a subprocess.
The communication relies on Standard Input/Output (stdio). The Host launches the Server executable (e.g., uvx mcp-server-git) and attaches to its stdin and stdout streams.
The Client writes JSON-RPC requests to the Server's stdin, reads responses from the Server's stdout, and treats anything on stderr as logging output rather than protocol traffic. This topology offers significant security benefits. Because the Server runs as a subprocess started by the user, it inherits the user's local permissions but operates within the confines of the process tree. If the Host terminates, the operating system ensures the Server process is also terminated, preventing orphaned processes.
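Under the hood, the exchange is just JSON-RPC messages framed one per line over the child process's pipes. The sketch below hand-rolls that framing purely for illustration; in practice the SDK transports manage the subprocess, the framing, and the full handshake for you.

```python
# Rough illustration of the local stdio topology: the Host spawns the Server as
# a child process and exchanges newline-delimited JSON-RPC over its pipes.
import json
import subprocess

server = subprocess.Popen(
    ["uvx", "mcp-server-git"],
    stdin=subprocess.PIPE,    # Host -> Server requests
    stdout=subprocess.PIPE,   # Server -> Host responses
    stderr=subprocess.PIPE,   # logs only, never protocol traffic
    text=True,
)

# Approximate shape of the first handshake message; the clientInfo values are
# placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}
server.stdin.write(json.dumps(request) + "\n")
server.stdin.flush()

print(server.stdout.readline())  # the Server's initialize result

server.terminate()  # the subprocess exits along with the Host
```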
While local subprocesses handle personal data effectively, enterprise architectures often require centralized context. In a remote topology, the Server runs as a standalone web service, often within a Docker container or a cloud function.
The Host connects to this remote Server using Server-Sent Events (SSE) for the transport layer.
The Client opens a long-lived HTTP connection to receive the Server's messages as SSE events and sends its own JSON-RPC requests via HTTP POST to a companion endpoint (e.g., /mcp/messages). This configuration changes the trust model. In a local topology, the Host trusts the Server binary explicitly by executing it. In a remote topology, the Host must authenticate the Server endpoint and ensure the transport is encrypted (HTTPS), as data travels over the network.
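With the official Python SDK, a remote connection might look like the sketch below. The URL is a placeholder, and a production Host would add authentication headers on top of HTTPS.

```python
# A sketch of connecting to a remote Server over SSE with the Python SDK.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Placeholder URL; real deployments require HTTPS and authentication.
    async with sse_client("https://mcp.example.com/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```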
A defining characteristic of the MCP topology is that Servers do not communicate with each other. They are completely isolated.
If a user asks an LLM to "Read the file data.csv and insert it into the SQL database," the Server responsible for the file system does not send data to the database Server. Instead, the data flows through the Host: the Host invokes the file system Server to read data.csv, receives the contents over that connection, passes them to the LLM, and then issues a separate request to the database Server to perform the insert.
This centralized routing ensures that the Host maintains control over the flow of information. It prevents a compromised or malfunctioning Server from accessing resources in another Server without explicit orchestration by the Host.
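A sketch of that orchestration, assuming two already-initialized sessions and hypothetical read_file and execute_query tools, could look like this:

```python
# Host-mediated flow between two isolated Servers. The sessions are assumed to
# be already initialized; the tool names are hypothetical.
from mcp import ClientSession

async def copy_csv_into_database(
    fs_session: ClientSession, db_session: ClientSession
) -> None:
    # Step 1: the Host asks the file system Server for the file contents.
    read_result = await fs_session.call_tool("read_file", {"path": "data.csv"})
    csv_text = read_result.content[0].text  # assumes a single text content block

    # Step 2: the data returns to the Host (and typically the LLM), never
    # directly to the other Server.
    insert_sql = f"-- INSERT statements derived from:\n{csv_text}"  # placeholder

    # Step 3: the Host issues a separate request to the database Server.
    await db_session.call_tool("execute_query", {"sql": insert_sql})
```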
When the topology is first established during the connection phase, the Client and Server engage in a handshake to declare capabilities. This allows the architecture to be version-agnostic and flexible.
The Server replies to the Client's initialize request with a result containing a capabilities object. This object defines which primitives the Server supports.
capabilities = { resources?, prompts?, tools?, logging? }
If a Server does not declare support for tools, the Client knows not to present any tool-use capabilities to the LLM for that specific connection. This negotiation step allows a Host to connect to a legacy Server without breaking, or to a specialized Server that only provides Prompts but no executable Tools.
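With the Python SDK, a Client can inspect the negotiated capabilities directly from the initialize result. The sketch below assumes a freshly connected session (before initialize has been called) and follows the SDK's model field names.

```python
# Inspecting the outcome of capability negotiation after the handshake.
from mcp import ClientSession

async def describe_server(session: ClientSession) -> None:
    result = await session.initialize()
    caps = result.capabilities

    # Each capability is optional; None means the Server did not declare it.
    print("tools:    ", caps.tools is not None)
    print("resources:", caps.resources is not None)
    print("prompts:  ", caps.prompts is not None)
    print("logging:  ", caps.logging is not None)

    if caps.tools is None:
        print("This connection will never be offered to the LLM for tool use.")
```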
The topology of MCP is designed to be rigid in structure (a star topology) but flexible in capability (negotiation). By strictly defining the roles of Host, Client, and Server, the protocol ensures that developers can build servers that are universally compatible with any MCP-compliant host application.