The Guide to MCP I never had

by Anmol · Apr 22, 2025 · 10 min read

Updated with the latest Composio docs, Tool Router, MCP Gateway, and everything that’s changed since MCP went mainstream.

AI agents have moved well past chat completion. They’re solving multi-step problems, coordinating workflows across dozens of apps, and operating with real autonomy. MCP — the Model Context Protocol — is the connective tissue behind many of these breakthroughs.

But MCP has evolved fast. The ecosystem in early 2025 looked nothing like what’s shipping today: MCP Gateways, Tool Routers, API-key enforcement, SDK-first workflows. If you’re just getting started — or catching up — this is the guide I wish existed when I began.

What We’re Covering

  1. Why existing AI tool integrations fall short

  2. What MCP actually is — core components explained

  3. How MCP works under the hood

  4. The problem MCP solves and why it matters

  5. The 3 layers of MCP (and how I finally understood them)

  6. Connecting 500+ managed MCP servers with Composio (the 2026 way)

  7. Limitations worth knowing about

1. Too Many APIs, Not Nearly Enough Context

Every tool you want an AI agent to use is a mini API integration. Imagine a user asks: "Did Anmol email me about yesterday’s meeting report?"

For the LLM to answer, it has to realize this is an email search task (not Slack or Notion), pick the correct endpoint like search_email_messages, parse the results, and summarize them in natural language — all while staying within its context window.

That’s a lot of cognitive load for a model. Models often forget steps, guess at parameters, or hallucinate their way through multi-step flows. And if you can’t verify accuracy, you may not even realize there’s a problem.

APIs are step-based, but LLMs aren’t great at remembering steps

Take a basic CRM update. First, you resolve the contact ID via get_contact_id. Then you fetch the record with read_contact. Finally, you apply the update with patch_contact. In traditional code, you abstract this into a function. With LLMs, each step is a chance for failure — a wrong parameter, a missed field, a broken chain.
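To make the chain concrete, here is a sketch of that flow in Python. The in-memory dict standing in for the CRM is invented for illustration; the three function names come from the text above, not from any real vendor API.

```python
# A minimal sketch of the three-step CRM flow, with an in-memory dict
# standing in for the real CRM backend.
CRM = {"c-42": {"email": "anmol@example.com", "title": "Engineer"}}

def get_contact_id(email):
    # Step 1: resolve an email address to a contact ID.
    for cid, record in CRM.items():
        if record["email"] == email:
            return cid
    raise KeyError(email)

def read_contact(contact_id):
    # Step 2: fetch the current record (often needed to build a valid patch).
    return dict(CRM[contact_id])

def patch_contact(contact_id, fields):
    # Step 3: apply the update.
    CRM[contact_id].update(fields)
    return CRM[contact_id]

def update_title(email, new_title):
    # Traditional code abstracts the chain into one function; an LLM
    # must re-derive each of these steps correctly on every call.
    cid = get_contact_id(email)
    read_contact(cid)
    return patch_contact(cid, {"title": new_title})
```

The point is the shape, not the code: a deterministic function executes this chain identically every time, while a model re-plans it from a prompt on each run.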

The fragile tower of prompt engineering

APIs evolve. Docs change. Auth flows get updated. Your perfectly working agent can break overnight because a third party changed something. And unlike traditional apps, there’s no shared framework or abstraction layer. Every AI tool integration is a fragile tower of prompt engineering and JSON crafting.

Vendor lock-in

Built your tools for GPT-4? If you switch to Claude or Gemini, you’re rewriting all your function descriptions and system prompts from scratch. There was no universal solution — until MCP.

2. What Is MCP? Core Components

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context and tools to LLMs. Think of it as a universal plugin system for AI agents.

With MCP, your agent can send emails through Gmail, create tasks in Linear, search documents in Notion, post messages in Slack, and update records in Salesforce — all by sending natural-language instructions through a standardized interface.

At its core, MCP follows a client-server architecture where a host application connects to multiple servers.

Core Components

  • MCP Hosts — apps like Claude Desktop, Cursor, Windsurf, or any AI tool that wants to access data via MCP.

  • MCP Clients — protocol clients that maintain 1:1 connections with MCP servers, acting as the communication bridge.

  • MCP Servers — lightweight programs that each expose specific capabilities (reading files, querying databases, sending emails) through the standardized protocol.

  • Local Data Sources — files, databases, and services that MCP servers can securely access.

  • Remote Services — external APIs and cloud-based systems that MCP servers can connect to.

3. How MCP Works Under the Hood

Clients

Clients are your apps — Cursor, Claude Desktop, and others. Their job is to request available capabilities from an MCP server, present those capabilities (tools, resources, prompts) to the AI model, relay the AI’s tool usage requests back to the server, and return the results.

Servers

MCP servers serve as intermediaries between users/AI and external services. They offer a standardized JSON-RPC interface for tool and resource access, convert existing APIs into MCP-compatible capabilities, and handle authentication and communication standards.
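As an illustration of that JSON-RPC interface, here is roughly what one tool-call exchange looks like on the wire. The search_emails tool and its arguments are invented for this example; the envelope (jsonrpc, method, params, result) follows the MCP specification's tools/call shape.

```python
import json

# An MCP tools/call request and a matching result, as JSON-RPC 2.0 messages.
# The tool name and arguments are illustrative, not a real server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_emails",
        "arguments": {"query": "meeting report", "max_results": 5},
    },
}

# The server's reply echoes the request id and wraps output in a content list.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "2 matching emails found"}],
        "isError": False,
    },
}

print(json.dumps(request, indent=2))
```

Because every server speaks this same envelope, a client only has to implement it once to talk to any tool.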

Building Blocks: Tools, Resources, and Prompts

  • Tools represent actions an AI can perform, like search_emails or create_issue_linear.

  • Resources represent data that an MCP server makes available to clients — file contents, database records, API responses, live system data. Each resource is identified by a unique URI.

  • Prompts guide the AI on how to behave during tool usage. They act like operational guides, helping the AI follow specific styles, workflows, or safety protocols.
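To make the tool building block concrete, here is roughly the shape of a single tool definition as a server would advertise it in a tools/list response. The search_emails tool and its schema are invented for illustration; the name/description/inputSchema structure is what the protocol defines.

```python
# Roughly what a server advertises for one tool: a name, a natural-language
# description, and a JSON Schema for its inputs. The model relies on the
# description and schema to call the tool with valid arguments.
tool_definition = {
    "name": "search_emails",
    "description": "Search the user's mailbox and return matching messages.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "max_results": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}
```

This structure is why MCP reduces hallucinated parameters: the schema tells the model exactly which fields exist and which are required.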

<aside> 🎯 Practical example: Imagine a Google Calendar MCP server. If you ask an AI to “reschedule all my meetings with Alice next week,” it may struggle with noisy API data. An MCP prompt can instruct the model to only modify matching events, extract them into a temporary resource, apply changes there, and sync back — clean, structured, reliable.

</aside>

4. The Problem MCP Solves

  • One common protocol = thousands of tools. Services describe what they can do using a consistent JSON-RPC format.

  • Clear separation of roles. The model thinks, tools act. Your agent doesn’t break every time Slack tweaks its API.

  • No vendor lock-in. You don’t have to redo tool descriptions when swapping GPT for Claude or Gemini.

  • Memory and multi-step workflows. MCP supports agents that remember things across tasks and chain actions together.

  • Fewer hallucinations. Clear, structured tool definitions help AI stay grounded and accurate.

<aside> ⚡ MCP turns the dream of a universal AI assistant into a practical reality for developers. The ability to compose actions into sophisticated workflows — with the AI handling the logic — is enabling a new era of intelligent automation.

</aside>

5. The 3 Layers of MCP

⚡ Model ↔ Context: “Talk to the LLM in a way it understands”

Imagine the Model as the brain of a robot. It can process information but needs clear instructions. Context provides those instructions. Telling a robot “Make me a sandwich” is too vague. Saying “Use this bread, ham, and cheese to make a sandwich” gives it context to understand and execute the task.

⚡ Context ↔ Protocol: “Give the LLM structured memory, tools, state”

Once the robot has instructions, it needs a way to follow them, remember details, and use tools. The Protocol is the system that enables this — it helps the robot remember ingredients, know how to handle the knife, and sequence the steps.

⚡ Protocol ↔ Runtime: “Actually run the AI agent”

The robot knows what to do (Context) and how to do it (Protocol). Now it needs to actually do it. The Runtime is the environment where the task comes to life — the kitchen where it all happens.

<aside> 🍽️ Restaurant analogy: The Model is the chef (knowledge and skills). Context is the menu (ingredients and how the meal should look). Protocol is the waiter (communicates the order and remembers allergies). Runtime is the kitchen (where tools, heat, and preparation come together).

</aside>

6. Connecting 500+ Managed MCP Servers with Composio (The 2026 Way)

The MCP landscape has evolved significantly. The old approach of running individual MCP servers with npx commands and managing separate mcp.json configs per app still works, but Composio’s platform has introduced two major upgrades: the Tool Router and the MCP Gateway.

What’s New in 2026

  • Tool Router (recommended). A single MCP endpoint that dynamically discovers and uses tools from 500+ integrations. Handles tool discovery, auth, and execution through a unified session.

  • MCP Gateway. For enterprise teams — a centralized control plane between AI agents and tools with SOC2/ISO certification, RBAC controls, and audit trails.

  • MCP API key enforcement. As of March 2026, new organizations have API key enforcement enabled by default for all MCP server requests.

  • New SDK. Composio ships @composio/core (TypeScript) and composio (Python) SDKs for programmatic MCP server management.

Approach 1: Tool Router (Recommended)

The Tool Router gives your agent a single MCP endpoint with dynamic tool access and context management built in. TypeScript:

import { Composio } from '@composio/core';

const composio = new Composio({ apiKey: process.env.COMPOSIO_API_KEY });

// Create a session for a user
const session = await composio.create("user-123", {
  toolkits: ["gmail", "slack", "notion"],
});

const mcpUrl = session.mcp.url;

Python:

from composio import Composio

composio = Composio(api_key="YOUR_API_KEY")

session = composio.create(
    user_id="user-123",
    toolkits=["gmail", "slack", "notion"]
)

mcp_url = session["mcp"]["url"]

Approach 2: Single Toolkit MCP

For dedicated, scoped servers with explicit tool allowlisting:

const server = await composio.mcp.create("my-gmail-server", {
  toolkits: [{ authConfigId: "ac_xyz123", toolkit: "gmail" }],
  allowedTools: ["GMAIL_FETCH_EMAILS", "GMAIL_SEND_EMAIL"]
});

const instance = await composio.mcp.generate("user-123", server.id);
console.log("MCP Server URL:", instance.url);

Using with AI Providers

OpenAI:

from openai import OpenAI

client = OpenAI(api_key="your-openai-api-key")
mcp_server_url = "https://backend.composio.dev/v3/mcp/YOUR_SERVER_ID?user_id=YOUR_USER_ID"

response = client.responses.create(
    model="gpt-5",
    tools=[{
        "type": "mcp",
        "server_label": "composio-server",
        "server_url": mcp_server_url,
        "require_approval": "never",
    }],
    input="What are my latest emails?",
)

Anthropic:

from anthropic import Anthropic

client = Anthropic(api_key="your-anthropic-api-key")
mcp_server_url = "https://backend.composio.dev/v3/mcp/YOUR_SERVER_ID?user_id=YOUR_USER_ID"

response = client.beta.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1000,
    messages=[{"role": "user", "content": "What are my latest emails?"}],
    mcp_servers=[{
        "type": "url",
        "url": mcp_server_url,
        "name": "composio-mcp-server"
    }],
)

What Composio Handles For You

  • Built-in auth supporting OAuth, API keys, JWT, and Basic Auth

  • 850+ managed integrations across Gmail, Slack, Notion, Linear, GitHub, Salesforce, and more

  • 20,000+ pre-built API actions for quick integration without coding

  • MCP API key enforcement (default for new orgs as of March 2026)

  • Server management APIs to list, update, and delete MCP servers programmatically

7. Limitations Worth Knowing About

Platform support is uneven

Claude and tools like Cursor and Windsurf support MCP directly. But ChatGPT or local models might not work out of the box. Support is growing but not universal.

Agent autonomy is imperfect

MCP gives agents the ability to use tools, but judgment is still a work in progress. Tool use depends on how well the model understands tool descriptions and usage context.

Performance overhead

Each MCP tool call is external and can be slower than the AI answering from training data. If you’re orchestrating multiple tools in sequence, latencies compound.

The trust issue

Most tools today run either fully autonomously or not at all. The better pattern: the AI drafts the action and asks for confirmation before executing.
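A minimal sketch of that draft-then-confirm pattern, written against a generic execute callback rather than any specific library API — the action name and payload below are invented for the demo.

```python
def confirm_then_execute(action_name, payload, execute, ask=input):
    # Show the drafted action and require an explicit "y" before running it.
    prompt = f"About to run {action_name} with {payload!r}. Proceed? [y/N] "
    if ask(prompt).strip().lower() == "y":
        return execute(payload)
    return None  # user declined; nothing was executed

# Example: guard a hypothetical email-send action behind confirmation.
sent = []
result = confirm_then_execute(
    "GMAIL_SEND_EMAIL",
    {"to": "alice@example.com", "subject": "Hi"},
    execute=lambda p: sent.append(p) or "sent",
    ask=lambda _prompt: "y",  # auto-confirm for this demo; use input() interactively
)
```

The key design choice is that the side effect lives entirely inside execute, so nothing irreversible can happen on the draft path.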

Scalability is evolving

Most MCP servers today are built for single users. This is where MCP Gateways come in: Composio’s gateway architecture provides a central control plane for managing security, observability, and operational complexity at scale.

Security requires attention

MCP doesn’t come with built-in authentication in the base protocol. Composio addresses this with MCP API key enforcement (default since March 2026), SOC2/ISO certification, sandboxed execution, and RBAC controls.

For deeper coverage of these risks, see Microsoft’s security guide for MCP.

What’s Changed Since Early 2025

  • From npx commands to SDK-first workflows. The Composio SDK lets you create and manage MCP servers programmatically.

  • Tool Router replaces individual server setup. A single endpoint with dynamic tool discovery across 500+ apps.

  • MCP Gateway for enterprise. Centralized reverse proxy for auth, observability, rate limiting, and governance.

  • API key enforcement by default. Closing the security gap of open MCP URLs.

  • Broader framework support. Direct integration with OpenAI, Anthropic, Mastra, AutoGen, LangChain, and Claude Agent SDK.

  • Rube: the all-in-one MCP. A single server that automatically discovers and selects the right tools, keeping LLM context clean.

Frequently Asked Questions

What is MCP?

Model Context Protocol is an open standard that enables AI agents to access tools and data consistently. It separates what the model thinks from what tools do.

What’s the difference between Tool Router and Single Toolkit MCP?

The Tool Router gives your agent a single endpoint with dynamic access to all toolkits. Single Toolkit MCP creates a dedicated server for one specific toolkit with explicit tool allowlisting. Tool Router is recommended for most use cases.

What is an MCP Gateway?

A specialized reverse proxy between AI agents and tools, providing centralized auth, observability, rate limiting, and governance. Think API Gateway, but purpose-built for AI agent communication patterns.

What are the limitations?

Support is uneven across platforms, agent judgment is imperfect, performance overhead adds up with multi-tool chains, and you need to manage security and prompt hygiene when exposing powerful tools.
