MCP vs A2A: Everything you need to know

Am I a bit late to talk about the MCP and A2A protocols? I hope not! Both have been all over the internet, and they are mind-blowing! A race is on right now, and nobody wants to be left behind in introducing new models and tools.
A few months back, Anthropic released the Model Context Protocol (MCP) for agents, and it got good traction from the community. Recently, we saw OpenAI's integration with MCP. MCP defines how an agent communicates with external APIs and makes calling multiple tools easier.
Now, Google has released the Agent2Agent (A2A) protocol to streamline agent communication. So, in short, A2A standardises agent-to-agent communication, while MCP standardises agent-to-tool communication.
So, yes, they are not competing but complementing each other. Google has even extended support for MCP in its Agent Development Kit (ADK).

This blog post explains how they work together to standardise building production-ready AI agents.
Let’s first discuss MCP and then proceed to the A2A protocol to see how both work together.
Understanding MCP – The Role of the Model Context Protocol
MCP stands for Model Context Protocol, an open standard developed by Anthropic. It defines a structured and efficient way for applications to provide external context to large language models (LLMs), like Claude, GPT, etc. Think of it like USB for AI — it lets AI models connect to external tools and data sources in a standardised way.
What’s the Core Problem MCP Solves?
MCP has three critical components:
- • Client: The client maintains a 1:1 connection with the servers, handles all the LLM routing and orchestration, and negotiates capabilities with the servers.
- • Server: These are API services, databases, and logs that the LLMs can access to complete tasks. Servers expose tools that the LLMs use to accomplish tasks.
- • Protocol: The core protocol standardises communication between clients and servers.
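Under the hood, that client-server communication is JSON-RPC 2.0 messages. As a rough sketch (the tool descriptor below is illustrative, not a real server's schema), a client discovering a server's tools looks like this:

```python
# Sketch of the JSON-RPC 2.0 exchange an MCP client uses to discover tools.
# The tool descriptor shown is illustrative, not a real server's schema.

# Client -> server: ask which tools are available.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Server -> client: tool descriptors (name, description, input schema).
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_event",
                "description": "Create a calendar event",
                "inputSchema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                },
            }
        ]
    },
}

# The client adds these descriptors to the LLM's context so the model
# knows which tools it can call and with what arguments.
tool_names = [t["name"] for t in list_tools_response["result"]["tools"]]
```

Because both sides speak this one message format, any client can negotiate capabilities with any server.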
For an in-depth guide on MCP, the architecture, and internal workings, check out this article: Model Context Protocol (MCP): Explained.
In short, MCP allows app developers to build clients (Cursor, Windsurf, etc.) and server developers to create API servers without worrying about each other’s implementation. Any client app or MCP host can connect to any MCP server, and vice versa.
Each tool implementation differs in its own way. Tools are structured differently:
- • Different field names (start_time vs event_time)
- • Different auth schemes (OAuth, API key, JWT, etc.)
- • Different error messages and formats
MCP standardises how the servers are built. You still have to write the integration logic for each app you need (or you can use Composio), but MCP allows any server built by anyone to be integrated with any MCP client. This standardisation makes life easier for millions of developers.
MCP abstracts away the differences, making it far easier for the LLM to send commands without needing custom logic for every specific tool.
You can think of it like this:
- • User: Adds Google Calendar MCP server to the client app, like Cursor IDE.
- • The client app fetches all the server-exposed tools and adds them to the LLM context.
- • User: “Schedule a team sync on Thursday at 3 PM.”
- • MCP client: The LLM receives the message, understands it needs to call a tool, and then calls it with required parameters. (You need to finish authentication first.)
- • Calendar MCP Server: Executes the tool with the required parameters
- • Result: The meeting is created without you manually configuring anything.
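The tool invocation in that flow can be sketched as a tools/call request (the tool name and argument fields here are assumptions for illustration, not the real Google Calendar server's schema):

```python
# Illustrative sketch of the tools/call message the client sends once the
# LLM decides to invoke the calendar tool. The tool name and argument
# fields are invented for illustration.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_event",
        "arguments": {
            "title": "Team sync",
            "start_time": "2025-04-24T15:00:00",
            "duration_minutes": 30,
        },
    },
}

def handle_tools_call(request: dict) -> dict:
    """Toy server-side handler: executes the tool and returns a result."""
    args = request["params"]["arguments"]
    # A real server would call the Google Calendar API here.
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {"content": [{"type": "text",
                                "text": f"Created '{args['title']}'"}]},
    }

response = handle_tools_call(call_request)
```

The client never needs to know how the server talks to Google Calendar; it only sees the standardised request and response shapes.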
So instead of spending hours wiring up six services with brittle code, you now have a modular interface that speaks one clean language, regardless of the tool behind it.
By the way, despite all its merits, MCP is not great for production use cases: from security issues to reliability issues, managing multiple server integrations can be very tough. That’s why we at Composio are building the safest and most robust MCP infra for your AI workflows.
Agent2Agent Protocol by Google
Google introduced the Agent2Agent (A2A) protocol, which is inspired by Anthropic’s Model Context Protocol (MCP). While MCP focuses on agent-to-server communication, A2A is about agent-to-agent interoperability.

Let’s say I’m using a travel assistant agent to plan a train trip from Delhi to Mumbai.
That travel agent would have its own LLM, but it could also connect to other specialised agents, like a train booking agent, a hotel booking agent, and maybe even a cab service agent.
So I could say:
“Plan a full trip from Delhi to Mumbai, book my train, find a hotel near the station, and arrange local transport.”
And this travel agent would figure out:
- • Which train to take
- • Which hotel to book
- • And which cab or ride service to schedule for pickup
Behind the scenes, it uses the A2A protocol to link itself with the other agents, like forming a mini-team, each handling their part of the job. It’s all coordinated and handled automatically.
That’s the power of agent-to-agent communication: modular, connected, and way more intelligent than manually doing it all.
A2A Design Principles
At its core, A2A (Agent-to-Agent) enables flexible and intelligent communication between autonomous agents, regardless of who built them or what ecosystem they belong to. The protocol is shaped around five foundational design principles that make it adaptable, extensible, and future-proof.
A2A in a nutshell
Key advantages
- True agentic behaviour – Agents run independently and cooperate without shared state or central control.
- Familiar tech stack – Pure HTTP + Server‑Sent Events + JSON‑RPC: fits straight into existing back‑end workflows.
- Enterprise‑grade security – Built‑in auth / authz on par with OpenAPI.
- Short‑ or long‑running tasks – Real‑time progress, state tracking, and human‑in‑the‑loop support.
- Modality‑agnostic – Handles text, audio, video, and other rich media.
How A2A Works
Stage | What happens |
---|---|
Capability discovery | Agents publish Agent Cards (JSON) that list skills, modalities, and constraints. |
Task lifecycle | A client agent delegates a task; the remote agent updates status until it delivers an artefact (result). |
Collaboration | Agents exchange messages, artefacts, and context—more than simple command/response. |
UX negotiation | Every message is broken into typed parts (text, chart, image, video, form, etc.), letting the sender tailor output to the client’s UI abilities. |
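The "typed parts" idea in UX negotiation can be sketched like this (field names follow the A2A draft's message/part structure; the values are invented):

```python
# Sketch of an A2A message split into typed parts, so the receiving client
# can render each part according to its UI capabilities. Values are invented.
message = {
    "role": "agent",
    "parts": [
        {"type": "text", "text": "Here is your booking confirmation."},
        {"type": "file", "file": {"mimeType": "image/png",
                                  "uri": "https://example.com/ticket.png"}},
        {"type": "data", "data": {"booking_id": "BK-1042",
                                  "status": "confirmed"}},
    ],
}

# A text-only client can keep just the parts it knows how to display,
# while a richer UI could also render the image and the structured data.
text_parts = [p["text"] for p in message["parts"] if p["type"] == "text"]
```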
Key Concepts of A2A Protocol
1. Multi-Agent Collaboration
- • Agents can now share tasks, communicate results, and work collaboratively across ecosystems.
- • Think of a recruiting agent talking to a company’s internal hiring agent, or a delivery app agent coordinating between McDonald’s and Subway agents.
2. Open & Extensible
- • A2A is an open protocol with contributions from over 50 tech companies (like Atlassian, Box, Langchain, PayPal, HCL, Infosys, etc.).
- • Built on existing standards like:
- • JSON-RPC
- • HTTP and Server-Sent Events (SSE)
3. Secure by Default
- • Supports authentication and authorisation based on widely used identity frameworks (e.g., OpenID Connect).
- • Uses well-known endpoint conventions for agent discovery (e.g., .well-known/agent.json).
Working of A2A With Examples
Architecture Example:
Let’s imagine a system involving three agents within a productivity suite:
Calendar Agent:
- • Hosted on its own server.
- • Pulls availability data from Google Calendar via its MCP interface.
Document Agent:
- • It also uses its own MCP server to retrieve documents and meeting notes from Notion or Google Docs.
Assistant Agent:
- • A user-facing LLM-based agent that delegates tasks to respective agents.
Flow:
A user says to the Assistant Agent:
“Schedule a meeting with Alice and summarize the key points from our last document.”
The Assistant Agent, using the A2A protocol, does the following:
- 1. Contacts the Calendar Agent to check Alice’s and the user’s mutual availability.
- 2. Contacts the Document Agent to pull the relevant document and summarise its content.
Each backend agent connects to its respective services (Google Calendar, Google Docs, etc.) using MCP, pulls the required data, and then responds through A2A.
So in this setup:
- • A2A is responsible for smooth agent-to-agent communication.
- • MCP bridges the gap between each agent and its underlying apps (Calendar, Docs, etc.).
Agent Discovery (Inspired by OpenID Connect)
Now, how do these agents know about each other?
Every organisation that hosts an agent exposes a discovery URL like:
yourdomain.com/.well-known/agent.json
This JSON file acts like a profile, and typically includes:
- • Agent name and description
- • Declared capabilities
- • Example queries it can handle
- • Supported modalities and protocols
This approach is inspired by OpenID Connect’s discovery mechanism (.well-known/openid-configuration), making agents discoverable and interoperable without needing tight coupling or manual configuration.
All these agents register themselves using .well-known/agent.json, enabling any new agent in the ecosystem to dynamically discover, assess, and interact with them, thanks to A2A’s standard messaging and coordination format.
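As a sketch, here is what a hypothetical Agent Card might contain and the check a client could run against it (the domain and all values are invented; the field names follow the A2A draft):

```python
# Minimal sketch of a .well-known/agent.json Agent Card. The domain and
# every value here are hypothetical; field names follow the A2A draft.
agent_card = {
    "name": "Dining Assistant",
    "description": "Finds and books restaurant tables.",
    "url": "https://agents.example.com/a2a",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "table_booking",
            "name": "Table Booking",
            "examples": ["Book a table for 4 this Friday night."],
        }
    ],
}

# A client would fetch https://agents.example.com/.well-known/agent.json,
# parse it, and decide whether this agent can handle the task at hand.
skill_ids = {s["id"] for s in agent_card["skills"]}
can_book = "table_booking" in skill_ids
```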
A2A vs MCP
Feature | MCP (Model Context Protocol) | A2A (Agent-to-Agent Protocol)
---|---|---
Communication | Agent ↔ External Systems/APIs | Agent ↔ Agent
Goal | API Integration | Collaboration & Interoperability
Layer | Backend (Data/API access) | Mid-layer (Agent Network)
Tech Standard | REST, JSON, DB Drivers | JSON-RPC, Services, Events
Inspired By | Language Server Protocol (LSP) | OpenID Connect, Service Discovery
MCP provides the tools each agent uses, while A2A facilitates the collaboration between agents. They complement each other, ensuring both the execution of individual tasks and the coordination of complex, multi-step processes.
While MCP equips agents with the necessary tools to perform specific tasks, A2A enables these agents to collaborate, ensuring a cohesive and efficient experience.
Both Anthropic’s MCP and Google’s A2A protocols facilitate interaction between AI systems and external components, but they cater to different scenarios and architectures.
Category | Anthropic MCP | Google A2A
---|---|---
Main Objective | Tailored for linking a single AI model with external tools and data pipelines. | Designed to support interaction between autonomous AI agents across environments.
Best Fit Scenario | Ideal for enterprise systems needing controlled and secure data access. | Suited for distributed B2B use cases where multiple AI agents need to coordinate.
Communication Protocol | Local: STDIO; Remote: HTTP with Server-Sent Events (SSE) for real-time responses. | HTTP/HTTPS enhanced with webhooks and SSE to support async, scalable messaging.
Service Discovery | Based on fixed server settings; connections are manually defined. | Uses Agent Cards to find and connect with compatible capabilities dynamically.
Interaction Pattern | Top-down approach—LLM accesses external resources directly. | Peer-to-peer collaboration—agents interact with each other as equals.
Security Approach | Focuses on controlled, secure access to data and tools within a single trust boundary. | Emphasises secure interactions across trust boundaries in multi-agent setups.
Workflow Handling | Optimised for simple, direct request-and-response flows. | Built for managing ongoing tasks with state tracking and lifecycle awareness.
1. Communication
MCP: Structured Schemas
- • In MCP (Model Context Protocol), the interaction is explicit and schema-driven.
- • The assistant knows exactly what tool to call, what arguments to pass, and in what format.
- • Flow: AI Assistant → Tool with structured input → Tool returns raw result.
MCP Flow:
- • AI sends: get_weather_forecast(Tokyo, 2025-04-22)
- • Tool returns: “Sunny, 22°C”
- • AI just displays the result.
A2A: Natural Language
- • A2A (Agent-to-Agent) is much more conversation-style, using natural language tasks.
- • Tasks are expressed like real user queries, and agents internally decide how to interpret them.
- • Flow: User Agent → Task in plain English → Target Agent processes → Responds naturally.
A2A Flow:
- • User says: “Can you tell me the weather in Tokyo on April 22nd and the current $NVDA price?”
- • Agent routes the request to the appropriate Finance/Weather Agent
- • Response might be: “Sure! The forecast for Tokyo on April 22nd is sunny with a high of 22°C, and $NVDA is currently at $101.42, down 0.064%.”
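A sketch of how such a natural-language task could be submitted over A2A's JSON-RPC interface (the tasks/send method name follows the A2A draft spec; the endpoint, task id, and text are invented for illustration):

```python
import uuid

# Sketch of submitting a natural-language task to a remote agent over
# A2A's JSON-RPC interface. The method name follows the A2A draft spec;
# the task id and message text are invented for illustration.
task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # client-generated task id
        "message": {
            "role": "user",
            "parts": [{
                "type": "text",
                "text": "Weather in Tokyo on April 22nd and current $NVDA price?",
            }],
        },
    },
}
```

Note the contrast with MCP: the payload is a plain-English goal wrapped in a message, not a function name with a rigid argument schema.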
2. Task Management
MCP: Single-Stage Execution
- • MCP handles tasks like a classic function call.
- • You call the function (or “tool”) and immediately get a response: either a success with the result or a failure (error/exception).
- • The whole process is immediate and atomic, one shot, one answer.
A2A: Multi-Stage Lifecycle
- • A2A treats tasks like long-running jobs.
- • Tasks have multiple possible states:
- • pending → waiting to start
- • running → work in progress (can even provide partial results!)
- • completed → final result ready
- • failed → something went wrong
- • You can check back anytime to see progress, grab partial data, or wait for the full result.
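The lifecycle above can be sketched with a toy in-memory "remote agent" (this fakes the state transitions locally; a real client would poll the actual remote agent, e.g. via a tasks/get call):

```python
# Toy illustration of A2A's multi-stage task lifecycle. The remote agent
# is faked with an in-memory dict; a real client would poll over HTTP.
task = {"id": "task-1", "state": "pending", "artifacts": []}

def remote_agent_step(t: dict) -> None:
    """Advance the fake task one stage per call."""
    order = ["pending", "running", "completed"]
    i = order.index(t["state"])
    if i < len(order) - 1:
        t["state"] = order[i + 1]
    if t["state"] == "completed":
        t["artifacts"].append({"type": "text", "text": "final result"})

states_seen = [task["state"]]
while task["state"] not in ("completed", "failed"):
    remote_agent_step(task)      # in reality: the remote agent works async
    states_seen.append(task["state"])
```

Compare this with MCP's one-shot call: here the client observes every intermediate state and only collects the artifact once the task reaches completed.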
3. Capability Specification
MCP: Low-Level, Instruction-Based
MCP capabilities are described with very strict schemas, usually in JSON Schema format. They are about precision and control, like telling a machine exactly what to do and how to do it.
```json
{
  "name": "book_table",
  "description": "Books a table at a restaurant",
  "inputSchema": {
    "type": "object",
    "properties": {
      "restaurant": { "type": "string" },
      "date": { "type": "string", "format": "date" },
      "time": { "type": "string", "pattern": "^\\d{2}:\\d{2}$" },
      "party_size": { "type": "integer", "minimum": 1 }
    },
    "required": ["restaurant", "date", "time", "party_size"]
  }
}
```
A2A: High-Level, Goal-Oriented
In contrast, A2A uses an Agent Card to describe capabilities regarding goals, roles, and expertise. It’s like explaining what someone is good at and trusting them to handle it.
```python
agent_card = AgentCard(
    id="restaurant-agent",
    name="Dining Assistant",
    description="Helps users find and book tables at restaurants.",
    agent_skills=[
        AgentSkill(
            id="table_booking",
            name="Table Booking",
            description="Can search restaurants and book tables as per user preferences.",
            examples=[
                "Book a table for 4 at an Italian place this Friday night.",
                "Find a quiet restaurant near downtown and reserve for two people."
            ]
        )
    ]
)
```
- • MCP lets you add skills (API services, databases, records, etc.) to your agents.
- • A2A gives you flexibility, judgment, and delegation power. Think of a team of thoughtful coworkers.
- • They’re like pairing an engineer (MCP) with a project manager (A2A). One does exact work; the other handles the chaos.
How to use MCP with A2A
One way to use MCP servers in A2A agents is via the Google Agent Development Kit (ADK). So, first, install the ADK.
Python example:

```shell
pip install google-adk
```
Import the necessary modules:

```python
# ./adk_agent_samples/mcp_agent/agent.py
import asyncio
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents.llm_agent import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService  # Optional
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, SseServerParams, StdioServerParameters

# Load environment variables from the .env file
load_dotenv('.env')
```
Configure the MCP server and fetch tools
# --- Step 1: Import Tools from MCP Server ---
async def get_tools_async():
"""Gets tools from the File System MCP Server."""
print("Attempting to connect to MCP Filesystem server...")
tools, exit_stack = await MCPToolset.from_server(
connection_params=SseServerParams(url="https://mcp.composio.dev/gmail/tinkling-faint-car-f6g1zk")
)
print("MCP Toolset created successfully.")
return tools, exit_stack
Here, we have used the HTTP SSE URL for the Gmail server from mcp.composio.dev.
For a STDIO-based tool:

```python
async def get_tools_async():
    """Gets tools from the File System MCP Server."""
    print("Attempting to connect to MCP Filesystem server...")
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=StdioServerParameters(
            command='npx',
            args=["-y",
                  "@modelcontextprotocol/server-filesystem",
                  "/path/to/your/folder"],
        )
    )
    print("MCP Toolset created successfully.")
    return tools, exit_stack
```
Create the agent:

```python
async def get_agent_async():
    """Creates an ADK Agent equipped with tools from the MCP Server."""
    tools, exit_stack = await get_tools_async()
    print(f"Fetched {len(tools)} tools from MCP server.")
    root_agent = LlmAgent(
        model='gemini-2.0-flash',  # Adjust if needed
        name='maps_assistant',
        instruction='Help user with mapping and directions using available tools.',
        tools=tools,
    )
    return root_agent, exit_stack
```
Define the main function:

```python
async def async_main():
    session_service = InMemorySessionService()
    artifacts_service = InMemoryArtifactService()  # Optional
    session = session_service.create_session(
        state={}, app_name='mcp_maps_app', user_id='user_maps'
    )

    # TODO: Use specific addresses for reliable results with this server
    query = "What is the route from 1600 Amphitheatre Pkwy to 1165 Borregas Ave"
    print(f"User Query: '{query}'")
    content = types.Content(role='user', parts=[types.Part(text=query)])

    root_agent, exit_stack = await get_agent_async()

    runner = Runner(
        app_name='mcp_maps_app',
        agent=root_agent,
        artifact_service=artifacts_service,  # Optional
        session_service=session_service,
    )

    print("Running agent...")
    events_async = runner.run_async(
        session_id=session.id, user_id=session.user_id, new_message=content
    )
    async for event in events_async:
        print(f"Event received: {event}")

    print("Closing MCP server connection...")
    await exit_stack.aclose()
    print("Cleanup complete.")


if __name__ == '__main__':
    try:
        asyncio.run(async_main())
    except Exception as e:
        print(f"An error occurred: {e}")
```
Now, run the application and see the agents in action.
Conclusion
MCP makes it easier for agents to communicate with the tools that wrap external application services, and Agent2Agent makes it easier for multiple agents to communicate and collaborate. Both MCP and Agent2Agent are steps in the direction of standardising agent development. It would be interesting to see how they transform the agentic ecosystem.