MCP and A2A solve two different problems in your agent architecture. Most teams are still building custom plumbing for both. Here is when to use which, and how they work together.
Every time you connect an AI agent to a database, you write an integration. Every time you connect that agent to a second database, you write another one. Swap the model, rewrite the integration. Add a new vendor's agent to the mix, and someone builds something custom so they can talk.
This is where most teams building with AI agents are right now. Not stuck on the AI part. Stuck on the plumbing.
Two open protocols exist to fix this: MCP (Model Context Protocol) and A2A (Agent-to-Agent Protocol). They are not competing standards. They solve different problems at different layers of the stack. But the naming is confusing enough that most engineering teams either pick one when they need both, or ignore both and keep hand-wiring.
If you have built anything with AI agents beyond a single chatbot, you have hit this wall. Your agent needs data from a CRM, access to a code repository, the ability to write to a database. Someone on your team writes a connector. It works. Then you swap models, or add a second agent from a different framework, and that connector breaks.
Multiply this across every tool and every agent in your system. Google's developer documentation calls this the "M×N problem": M agents multiplied by N tools, each needing its own integration.[1] At five agents and ten tools, you are maintaining fifty custom connectors. At enterprise scale, the number becomes absurd.
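The arithmetic behind the M×N problem is worth making explicit. A minimal sketch, with the counts taken from the example above; the M+N figure assumes each agent implements the protocol client once and each tool gets one protocol server:

```python
agents, tools = 5, 10

# Point-to-point wiring: every agent needs its own connector to every tool.
custom_connectors = agents * tools      # M x N = 50

# With a shared protocol, each side implements the standard once:
# one protocol client per agent, one protocol server per tool.
protocol_adapters = agents + tools      # M + N = 15

print(custom_connectors, protocol_adapters)
```

The gap widens with every agent or tool you add: the first number grows multiplicatively, the second linearly.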
This is not a problem you need to solve from scratch. It has been solved. The protocols just need adopting.
MCP was released by Anthropic in November 2024 as an open standard for connecting a single AI agent to external data sources and tools.[2]
The architecture has three parts. An MCP host is where your AI agent runs. An MCP server sits below it and knows how to communicate with a specific resource: a file system, a database, a code repository, a third-party API. The agent never touches the resource directly. It talks to the MCP server through three primitives: tools (functions the agent can invoke, such as running a query), resources (data the server exposes for the agent to read), and prompts (reusable templates that shape how the agent works with that server).
The transport depends on where the server runs. Local servers (say, an IDE plugin reading your file system) use standard input/output. Remote servers use HTTP with streaming support. Both use JSON-RPC for the message format.[3]
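To make the wire format concrete, here is a sketch of what a tool invocation looks like as a JSON-RPC 2.0 message. The `tools/call` method name follows the published MCP specification; the tool name and its arguments are invented for illustration:

```python
import json

# A hypothetical MCP tool invocation, framed as JSON-RPC 2.0.
# "query_database" and its arguments are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",  # a tool the MCP server exposes
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Over stdio transport this is written as a line of JSON; over HTTP
# it travels as the request body. The message format is the same either way.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])
```

The point of the shared envelope: the agent does not care whether the server behind it wraps a file system or a CRM. It sends the same shape of message to both.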
The practical value: write an MCP server once for your CRM, and any MCP-compatible agent can use it. Swap the model, swap the host application, the server stays the same. And because MCP is open, pre-built servers already exist for Slack, GitHub, PostgreSQL, file systems, and dozens of other common tools.[2]
Within twelve months of its launch, MCP became the de facto standard for agent-to-tool connections. OpenAI, Microsoft, and Google all added MCP support to their agent frameworks.[4]
MCP handles the connection between an agent and its tools. It does not handle what happens when you have multiple agents, built by different teams or vendors, that need to coordinate with each other.
A2A exists for this. Agent-to-Agent Protocol was launched by Google in April 2025, backed by over 50 technology partners including Salesforce, SAP, Atlassian, and ServiceNow, and is now housed under the Linux Foundation.[5]
Each agent publishes an agent card: a standardised descriptor that tells other agents what it can do and what kinds of input it works with. Other agents discover these cards dynamically, work out what skills are available, and delegate tasks accordingly. Think of it as a machine-readable job description.
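An illustrative agent card for a hypothetical ordering agent might look like the sketch below. The top-level field names mirror the A2A agent card schema; every value, including the URL and skill, is invented for the example:

```python
import json

# A hypothetical agent card. Field names follow the A2A agent card
# schema; all values here are invented for illustration.
agent_card = {
    "name": "ordering-agent",
    "description": "Places and tracks purchase orders with suppliers.",
    "url": "https://agents.example.com/ordering",  # the agent's A2A endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},  # can push progress via SSE
    "skills": [
        {
            "id": "create-order",
            "name": "Create purchase order",
            "description": "Raise a purchase order for a given SKU and quantity.",
        }
    ],
}

# Another agent fetches this card, inspects the skills, and decides
# whether to delegate a task here.
print(json.dumps(agent_card, indent=2))
```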
Communication runs over plain HTTP using JSON-RPC 2.0. Agents exchange structured messages that can carry text, images, files, or structured data through the same envelope.[5] For long-running work, A2A supports streaming updates via server-sent events, so one agent can push progress to another in near real time.
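A delegation request between agents can be sketched the same way. The JSON-RPC 2.0 framing is per the spec; the method name and the exact params shape are simplified here and should be checked against the current A2A specification before relying on them:

```python
import json

# A sketch of one agent handing a task to another over A2A.
# The method name and message structure are a simplification of the
# published spec; the task text and id are invented.
task_request = {
    "jsonrpc": "2.0",
    "id": "req-42",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Reorder 500 units of SKU-1234"}],
        }
    },
}

body = json.dumps(task_request)
decoded = json.loads(body)
print(decoded["method"])
# The receiving agent replies with a task object; for long-running work
# it can instead stream status updates over server-sent events.
```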
A2A is not for connecting an agent to your database. It is for connecting your agent to another agent that knows something yours does not, or can do something yours cannot.
Take a retail company running an inventory management system. The inventory agent needs access to product databases and stock-level records. It connects through MCP. One MCP server for the product catalogue, another for stock levels. The agent reads current inventory, checks reorder thresholds, and writes updated records, all through a standard interface that any compatible agent could reuse tomorrow with zero rework.
When the inventory agent detects a product running low, it needs to talk to an internal ordering agent. That ordering agent, in turn, needs to negotiate with external supplier agents built by entirely different organisations on entirely different stacks. This is A2A. The agents discover each other through agent cards, exchange task requests over HTTP, and stream progress updates as orders are placed and confirmed.
MCP handles the vertical connection: agent to data. A2A handles the horizontal connection: agent to agent. Pull either protocol out and you are back to writing custom integrations for every link in the chain.
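The division of labour in the inventory example can be sketched as a control loop. Everything here is a stand-in: `mcp_call` would go through an MCP client to a server, `a2a_send` would POST a JSON-RPC message to another agent's A2A endpoint, and the function names, URL, SKU, and threshold are all invented for illustration:

```python
def mcp_call(server: str, tool: str, **arguments) -> dict:
    """Stand-in for an MCP tools/call round trip (vertical: agent to data)."""
    stock = {"SKU-1234": 12}  # canned data so the sketch runs
    return {"sku": arguments["sku"], "on_hand": stock[arguments["sku"]]}

def a2a_send(agent_url: str, text: str) -> dict:
    """Stand-in for an A2A message round trip (horizontal: agent to agent)."""
    return {"status": "submitted", "task": text}

REORDER_THRESHOLD = 50  # invented for the example

# Vertical: read stock levels through an MCP server.
level = mcp_call("stock-levels", "get_stock", sku="SKU-1234")

# Horizontal: if stock is low, delegate reordering to the ordering agent.
if level["on_hand"] < REORDER_THRESHOLD:
    result = a2a_send(
        "https://agents.example.com/ordering",
        f"Reorder {REORDER_THRESHOLD - level['on_hand']} units of {level['sku']}",
    )
    print(result["status"])  # prints "submitted"
```

Note what the loop never does: it never speaks the database's native protocol, and it never assumes anything about how the ordering agent is built. Each link in the chain is replaceable.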
The two protocols were designed to work together. MCP gives each individual agent the context it needs to do useful work. A2A gives multiple agents the ability to collaborate without requiring everyone to build on the same framework.
If you are building a single-agent system that needs tool access, start with MCP. The ecosystem is mature. Pre-built servers already cover most common tools, and you can write a custom server for anything they do not.
If you are building or planning a multi-agent system where agents from different vendors need to coordinate, A2A is worth investing in now. The protocol is younger and adoption is earlier-stage, but the architecture is clean and the backing from Google, Salesforce, SAP, and others is substantial.[5]
If you are doing both, and most production agent systems will eventually need to, implement MCP first for tool access, then layer A2A on top for agent coordination. That ordering matches how most systems grow naturally.
The worst move is the one most teams are currently making. Building custom connectors for everything and hoping the maintenance stays manageable. It will not.
A note from fusecup
At fusecup, we work with engineering and product leaders building agent systems that need to scale beyond a single prototype. If you are sorting out how MCP and A2A fit into your architecture, or trying to decide which to adopt first, we are happy to talk it through. No agenda, no pitch. Just a practical conversation about what makes sense for where you are right now.