AI Voice Agents

MCP, A2A, and the Emerging Standards for Agentic AI — What Every Business Needs to Know

Workforce Wave

April 17, 2026 · 15 min read
#a2a#agentic-ai#ai-standards#api#infrastructure#mcp

In 1991, Tim Berners-Lee published the HTTP specification. It was a technical document describing a protocol for transferring hypertext — web pages — over a network. Its eventual impact was the entire modern web economy: e-commerce, social media, SaaS, streaming, remote work. None of that was obvious in 1991. What was happening was a standards formation event — a moment when an infrastructure protocol became ubiquitous enough to serve as the foundational layer for everything built on top of it.

In 1982, Jon Postel published SMTP — the Simple Mail Transfer Protocol. It became the standard for electronic mail exchange between servers. From that standard came email as a universal business communication layer, and everything that built on top of it — newsletters, transactional notifications, support tickets, marketing automation.

We are in the middle of a similar standards formation event right now. It is happening faster and with less public awareness than either HTTP or SMTP. The protocols are called MCP (Model Context Protocol) and A2A (Agent-to-Agent protocol). They are doing for AI agents what HTTP did for web documents and SMTP did for email messages.

The businesses that understand these protocols early — that build infrastructure on them, that evaluate their software vendors against them, that ask the right questions about readiness — will have structural advantages that compound over the next five years. The businesses that discover them late will spend those five years catching up.

This guide is a plain-language explanation of what MCP and A2A are, why they matter, how they work together, and what businesses should do about them now.


Section 1: The Protocol Moment

To understand why MCP and A2A matter, you need to understand the problem they solve.

As of early 2026, most AI assistants — Claude, GPT-4o, Gemini, Cursor — are fundamentally disconnected from the world. They can reason about information in their training data. They can process documents you paste into the conversation. But they cannot, on their own, look up a real-time piece of data from your CRM, book an appointment in your scheduling system, run a query against your database, or check the current status of an order. They are extraordinarily capable thinking tools that are largely locked out of taking real-world actions.

The current workaround is custom integrations. Every time a developer wants an AI to interact with an external system, they write a custom integration: an API call, a data transformation, a response parser. This is fine for one integration. It does not scale. A business with 20 software systems would need to build 20 custom integrations for each AI model they want to use — and if they switch models, rebuild them. An AI platform wanting to support 500 business tools would need to build 500 individual connectors.

This is exactly the problem that the internet faced in the early 1990s with document transfer. Every organization had its own document format, its own transfer protocol, its own client software. What solved it was a universal standard — HTTP — that defined how any client could request any document from any server, regardless of who built either side.

MCP is HTTP for AI tool use. It defines a universal standard for how any AI client can request access to any tool or data source from any MCP server, regardless of who built either side. Once a business software system publishes an MCP server, every AI that implements the MCP standard can use it. Once an AI implements the MCP standard, it can use every business software system that publishes a server.

This is a combinatorial explosion of capability. It is already happening.

The Numbers

Anthropic published MCP as an open standard in November 2024. By early 2026, the MCP SDK had reached approximately 97 million monthly downloads. Enterprise software companies have moved quickly: Salesforce has integrated MCP into Agentforce; Oracle has built MCP support into Fusion Cloud; Stripe published an MCP server that lets AI systems create payment intents, look up customers, and process refunds using natural language. GitHub, Atlassian, Linear, and hundreds of smaller SaaS companies have published MCP servers.

The pattern mirrors exactly what happened with REST APIs in the early 2010s: a standard emerged, enterprises adopted it, and within a few years it became the assumed foundation for how software systems expose their capabilities.


Section 2: MCP — What It Is and Why It Matters

The Architecture

MCP defines three roles in an interaction:

The MCP Client is the AI system — Claude, GPT-4o, Gemini, or any AI assistant implementing the protocol. The client sends requests describing what it wants to do.

The MCP Server is a software system — a CRM, a scheduling tool, a database, a voice AI platform — that exposes its capabilities through the MCP interface. The server describes what tools and resources are available, receives requests for them, and returns results.

The Tools and Resources are the specific capabilities the server exposes. A CRM's MCP server might expose tools for looking up a contact, creating a lead, updating a deal status, and running a report. A voice AI platform's MCP server might expose tools for provisioning an agent, querying call transcripts, updating knowledge base documents, and checking agent performance metrics.

The communication protocol between client and server is standardized: the client can ask the server what tools are available (a discovery call), and the server returns a manifest. The client can then invoke any tool in the manifest with the appropriate parameters, and the server executes the operation and returns a result.
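The discovery-then-invoke exchange above can be sketched as two JSON-RPC 2.0 messages, which is MCP's wire format. The `tools/list` and `tools/call` method names come from the MCP specification; the `lookup_contact` tool and its arguments are hypothetical, not any real server's API.

```python
import json

def make_discovery_request(request_id: int) -> dict:
    """JSON-RPC 2.0 request asking an MCP server for its tool manifest."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """JSON-RPC 2.0 request invoking one tool from the manifest."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Step 1: discovery. The server replies with a manifest of tool
# definitions (name, description, and input schema for each).
discovery = make_discovery_request(1)

# Step 2: invocation of a tool named in that manifest.
# "lookup_contact" is a hypothetical CRM tool used for illustration.
call = make_tool_call(2, "lookup_contact", {"email": "jane@example.com"})
print(json.dumps(call, indent=2))
```

The same two-message shape covers every MCP server: only the tool names and argument schemas in the manifest change from one system to the next.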

What MCP Makes Possible

The clearest way to understand MCP is through an example of what it enables that was previously impractical.

A dental practice manager is using Claude as a work assistant. Without MCP, they can ask Claude to help draft a patient communication or explain a CDT code. Claude can help with language and reasoning tasks. It cannot interact with the practice's actual systems.

With an MCP server published by their voice AI platform, the practice manager can — from inside the same Claude conversation — say: "What was our call volume last Tuesday between 8am and noon? Were there any intents we failed to resolve? And update the knowledge base to add that we now accept CareCredit."

Claude sends a discovery call to the voice AI MCP server, receives the tools manifest (tools for call analytics, intent resolution queries, knowledge base management), executes the appropriate tool calls, and returns the results in the conversation. The practice manager didn't switch applications. They didn't log in to a dashboard. They used natural language to query and modify a live business system.

This is what MCP makes possible: AI assistants that are genuinely connected to the business systems their users work with, accessible through natural language, without custom integration work for every AI-system pair.

The WFW MCP Server

At /api/v2/mcp, a Workforce Wave MCP server exposes the full operational surface of the platform. From inside a Claude Code session, a developer or an AI system can:

  • Provision a new agent for a client (passing configuration parameters, receiving back the provisioned agent ID and phone number)
  • Query call transcripts for a specific agent or date range
  • Search the knowledge base for a specific document or topic
  • Update knowledge base documents with new content
  • Check agent performance metrics (intent resolution rate, escalation rate, average call duration)
  • Retrieve compliance audit logs
  • List and configure active integrations

The design principle is that everything accessible through the WFW dashboard is equally accessible through the MCP server — because AI systems deserve the same access that human users have.


Section 3: A2A — What It Is and Why It Matters

If MCP is the standard for AI-to-tool interaction, A2A is the standard for AI-to-AI interaction. They are complementary standards that together enable a fully agentic enterprise.

The Discovery Problem

As AI agents proliferate, a coordination problem emerges: how does one AI agent know that another AI agent exists, what it can do, and how to interact with it?

In human terms, this is like the early days of the phone book. Phones existed. Connections were possible. But there was no systematic way to discover the number you needed unless someone told you. The phone book — a standardized directory of names and numbers — solved the discovery problem.

A2A solves the AI agent discovery problem through a concept called the Agent Card: a machine-readable JSON document published at a well-known URL (/.well-known/agent.json) that describes an AI agent's identity, capabilities, supported interaction protocols, and endpoint addresses.

When a hospital's AI system wants to interact with a dental practice's voice AI agent, it doesn't need a human to introduce them. It sends an HTTP request to the dental practice's known domain at /.well-known/agent.json. The Agent Card returns: who this agent is, what interactions it supports (scheduling, eligibility verification, patient communication), what protocols it speaks (voice, structured JSON, A2A), and where to send messages.

This is the AI equivalent of a DNS lookup. It enables fully autonomous agent discovery and interaction without human-mediated integration work.
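As a sketch, that DNS-style lookup reduces to one HTTPS GET against the well-known path. Only the `/.well-known/agent.json` path and the `capabilities` field come from the description above; the helper names and exact card schema are assumptions.

```python
import json
from urllib.parse import urlunsplit
from urllib.request import urlopen

WELL_KNOWN_PATH = "/.well-known/agent.json"

def agent_card_url(domain: str) -> str:
    """Build the well-known Agent Card URL for a domain."""
    return urlunsplit(("https", domain, WELL_KNOWN_PATH, "", ""))

def fetch_agent_card(domain: str) -> dict:
    """Fetch and parse a domain's Agent Card (live network call)."""
    with urlopen(agent_card_url(domain)) as resp:
        return json.load(resp)

def supports(card: dict, capability: str) -> bool:
    """Check whether a parsed card advertises a given capability."""
    return capability in card.get("capabilities", [])
```

A caller would use `fetch_agent_card("dental-practice.example.com")`, then call `supports(card, "scheduling")` before initiating any interaction.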

The Agent Card Format

An Agent Card is a JSON document with a standardized schema. Key fields include:

  • name and description: Human-readable identification
  • capabilities: What the agent can do (scheduling, billing, intake, etc.)
  • protocols: How it can be contacted (voice call, webhook, A2A message)
  • endpoint: The address at which the agent receives incoming interactions
  • authentication: How callers should authenticate themselves
  • compliance: What regulatory frameworks the agent operates under (HIPAA, TCPA, etc.)

The compliance field is particularly significant. An insurance company's AI system can read a dental practice agent's Agent Card and know, before initiating any interaction, that the agent operates under HIPAA and requires BAA-compliant interaction protocols. This enables automated compliance negotiation between AI systems — a capability that will become essential as AI-to-AI interactions scale.
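Putting those fields together, a card for the dental-practice example might look like the following. This is an illustration built from the field list above, not the exact A2A schema, and every value is hypothetical.

```python
# Illustrative Agent Card; field names follow the list above, values are made up.
agent_card = {
    "name": "Smile Dental of Phoenix - Front Desk Agent",
    "description": "Voice AI agent for scheduling, billing, and eligibility checks.",
    "capabilities": ["scheduling", "eligibility-verification", "patient-communication"],
    "protocols": ["voice", "webhook", "a2a"],
    "endpoint": "https://dental.example.com/agents/4821/a2a",
    "authentication": {"scheme": "bearer"},
    "compliance": ["HIPAA", "TCPA"],
}

# A caller can gate the interaction on the compliance field before
# sending anything: HIPAA here implies BAA-compliant protocols.
requires_baa = "HIPAA" in agent_card["compliance"]
```

The insurance company's AI in the example reads exactly this kind of `compliance` field to decide, before contact, which interaction protocols the exchange must use.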

Individual Agent Cards

Beyond the domain-level Agent Card at /.well-known/agent.json, individual agents can publish their own cards at /agents/{id}/.well-known/agent.json. This matters for platforms serving multiple clients from the same infrastructure.

A dental billing clearinghouse doesn't want to interact with "Workforce Wave" as a single entity — it wants to interact with "Smile Dental of Phoenix, practice ID 4821" as a specific agent. The individual Agent Card for that agent carries practice-specific information: the phone number, the specific services offered, the specific EHR system it connects to, the coverage for that practice's patient population.

This architecture — domain-level discovery plus individual agent cards — enables AI systems to navigate a large multi-tenant platform and interact with specific agents within it, the same way a web crawler can discover every page on a domain and then fetch the individual ones it needs.


Section 4: How MCP and A2A Work Together

MCP and A2A address different parts of the agentic AI problem. Understanding how they complement each other is important for designing AI-ready business infrastructure.

MCP is the internal tool bus. It connects AI assistants to the tools and data sources within an organization's stack. When a management AI inside a dental DSO wants to query call performance data across all practices, it uses MCP to talk to the voice AI platform's MCP server. MCP is about AI-to-tool connectivity.

A2A is the external agent network. It enables AI agents to discover and communicate with each other across organizational boundaries. When an insurance company's AI wants to verify benefits at a specific dental practice, it uses A2A to discover the practice's agent via the Agent Card, then initiates the interaction. A2A is about AI-to-AI connectivity.

A fully agentic enterprise needs both.

A Concrete Business Example

Consider a dental DSO operating 40 practices, using a Claude-based management AI for operational oversight.

Using MCP: The DSO's management AI runs a daily performance review. It uses MCP to call the voice AI platform's analytics tool, requesting intent resolution rates, escalation rates, and after-hours call volume for all 40 practices over the past 24 hours. Three practices show elevated escalation rates. The AI uses MCP to query the call transcripts for those practices. It identifies that two practices have outdated insurance information in their knowledge bases. It uses MCP to push updated insurance acceptance information to those knowledge bases. All of this happens autonomously overnight, with the DSO operations team reviewing a summary in the morning.

Using A2A: A dental benefits administrator — an AI system run by a large insurance company — needs to verify benefits for 200 patients scheduled at DSO practices over the next week. It reads the DSO domain's .well-known/agent.json to discover the platform. It then reads individual practice Agent Cards to confirm which practices serve which insurance networks. For each patient, it initiates an A2A interaction with the specific practice agent: a structured request for benefits verification for a named patient on a specific date. Each practice agent returns structured eligibility data. The benefits administrator processes 200 verifications without any human involvement on either side.

The MCP interaction is internal — the DSO's AI managing its own infrastructure. The A2A interaction is external — an insurance company's AI interacting with the DSO's agents. Both are required for the DSO to operate as a truly agentic organization.
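The 200-verification loop on the A2A side can be sketched as below. The data shapes and the `send_a2a_request` stub are hypothetical; a real client would POST each structured payload to the endpoint found in the practice agent's Agent Card and parse the structured reply.

```python
from dataclasses import dataclass, asdict

@dataclass
class VerificationRequest:
    patient_name: str
    member_id: str
    service_date: str  # ISO date, e.g. "2026-04-20"

def send_a2a_request(endpoint: str, req: VerificationRequest) -> dict:
    """Stand-in for the A2A exchange: a real client would POST the
    structured payload to the agent's endpoint and parse the reply."""
    payload = asdict(req)
    # ... network exchange elided in this sketch ...
    return {"endpoint": endpoint, "request": payload, "eligible": True}

def verify_batch(practice_agents: dict, schedule: list) -> list:
    """One structured eligibility request per scheduled patient,
    routed to the agent discovered via its individual Agent Card."""
    results = []
    for entry in schedule:
        agent = practice_agents[entry["practice_id"]]
        req = VerificationRequest(entry["name"], entry["member_id"], entry["date"])
        results.append(send_a2a_request(agent["endpoint"], req))
    return results
```

The point of the sketch is the shape of the flow: discovery happens once per practice, then each patient becomes one structured request-response pair with no human in the loop.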


Section 5: The Practical Implications for Businesses

Your Vendors Will Add MCP Servers

The major SaaS platforms your business uses will publish MCP servers. This is already happening: Salesforce, HubSpot, Zendesk, QuickBooks, and hundreds of others have either published MCP servers or announced them. When they do, every AI assistant your team uses will be able to interact with those systems through natural language.

The practical implication is twofold. First, you should be asking your critical vendors: do you have an MCP server? If not, what's your roadmap? If they don't have an answer, that is relevant information about how seriously they are taking the AI-native future. Second, you should be thinking about what data and operations you want AI-accessible. Not everything in your systems should be equally accessible to AI queries — access scoping in MCP (what tools each AI client is allowed to invoke) will become an important governance question.

Your Business's Phone Number Will Become AI-Callable

This is the A2A implication that most businesses haven't internalized yet. As A2A adoption grows, your business's phone number — or your AI agent's endpoint — will appear in the discovery layer that other AI systems use to find services. An A2A-registered agent at a dental practice is discoverable by insurance AI systems, referral AI systems, scheduling AI systems from other practices, and specialist coordination AI systems.

This is not a threat — it is an opportunity. The dental practice that is A2A-discoverable and can serve structured data to AI callers will process insurance verifications faster, receive referrals more reliably, and coordinate patient care more efficiently than the practice that is not. But you need to be A2A-ready before those AI callers start arriving in volume.

Evaluate Your Critical Software for MCP and A2A Readiness

When evaluating any new software vendor — and when reviewing your existing vendors — add these questions:

  • Do you publish an MCP server? What tools does it expose? Is it read-only or read-write?
  • Do you support A2A Agent Cards? At the domain level and the individual agent level?
  • What scoping controls do you provide for MCP access (so we can limit what AI systems can do)?
  • Do you have an llms.txt file at your API domain to help AI systems understand your service?

These questions will quickly reveal whether you are talking to a vendor building for the AI-native future or one that is still in the pre-agentic world.

llms.txt: The Third Standard

Alongside MCP and A2A, a third emerging standard is worth noting: llms.txt. Proposed as the AI equivalent of robots.txt, an llms.txt file at a domain's root (or at an API domain's root) provides AI systems with machine-readable information about the service — what it does, how to interact with it, what the MCP server exposes, where the Agent Cards are, and what authentication is required.

Think of llms.txt as the index that helps AI systems navigate to the specific capabilities they need. Without it, an AI system discovering your domain for the first time must infer a great deal from your web content. With it, the AI gets a structured briefing: "here is what this service does, here is how to interact with it programmatically, here is where the MCP server is, here is where the Agent Cards are."
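As an illustration, a minimal llms.txt for this kind of API domain might read as follows. The layout loosely follows the llms.txt proposal (a title, a summary blockquote, then link sections); the specific entries are hypothetical apart from the paths already described in this article.

```
# Workforce Wave API

> Voice AI agent platform. Programmatic access via an MCP server,
> A2A Agent Cards, and an authenticated REST API.

## Programmatic access

- MCP server (tool discovery and invocation): /api/v2/mcp
- Domain-level A2A Agent Card: /.well-known/agent.json
- Per-agent Agent Cards: /agents/{id}/.well-known/agent.json
- Authentication: bearer tokens with scoped access
```

A few dozen lines like these replace a great deal of inference from marketing pages.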


Section 6: The WFW Implementation — Why We Built All Three

Workforce Wave has implemented MCP, A2A, and llms.txt as first-class infrastructure, not afterthoughts. The architectural reasoning behind each decision illustrates why these standards matter for any AI-forward business.

MCP at /api/v2/mcp

The WFW MCP server exposes the full operational surface of the platform. Every operation available through the dashboard is available through MCP — provisioning, knowledge base management, call analytics, compliance reporting, integration configuration. The design philosophy is that AI systems deserve the same access that human users have. Restricting AI access to a subset of operations would mean that AI-assisted management workflows can't do everything a human administrator can do. That's an artificial ceiling.

The MCP server implements scoped access: clients must authenticate and declare what scopes they need. A developer's personal AI assistant might get read-only analytics access. A DSO management AI might get read-write access to knowledge base management and provisioning. Full administrative access is reserved for authenticated human users and explicitly authorized AI systems. This scoping model is essential for governance at scale.
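A minimal sketch of that scoping model: map each tool to the scope it requires and reject any call from a client that does not hold it. The scope and tool names here are hypothetical, not WFW's actual configuration.

```python
# Hypothetical tool-to-scope map; real scope names would come from the
# platform's own authorization model.
TOOL_SCOPES = {
    "get_call_metrics": "analytics:read",
    "update_knowledge_base": "kb:write",
    "provision_agent": "provisioning:write",
}

def authorize(granted_scopes: set, tool_name: str) -> bool:
    """Allow a tool call only if the client holds the required scope.
    Unknown tools are denied by default."""
    required = TOOL_SCOPES.get(tool_name)
    return required is not None and required in granted_scopes

# A read-only analytics client can query metrics but not provision agents.
read_only = {"analytics:read"}
assert authorize(read_only, "get_call_metrics")
assert not authorize(read_only, "provision_agent")
```

Deny-by-default for unknown tools is the important design choice: new tools added to the server stay invisible to existing clients until a scope is explicitly granted.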

A2A Agent Cards at /.well-known/agent.json and /agents/{id}/.well-known/agent.json

Every WFW agent — from the platform level down to individual client agents — publishes an Agent Card. The domain-level card describes the platform: what it does, what protocols it supports, how to discover individual agents. Individual agent cards carry client-specific information: the agent's phone number, the vertical it operates in, the compliance frameworks it enforces, the integrations it supports, and the interaction protocols it accepts.

The compliance section of individual Agent Cards is particularly important for healthcare and financial services deployments. An insurance company's AI system can read a dental practice agent's card and see: "this agent operates under HIPAA, requires BAA-compliant interaction protocols, and will return PHI in encrypted form with audit logging." This automates compliance negotiation — the receiving AI knows exactly what rules govern the interaction before it begins.

llms.txt at api.workforcewave.com

The llms.txt file at the API domain provides AI systems with a structured briefing about the WFW platform. It describes the service, links to the MCP server documentation, points to the domain-level Agent Card, describes the authentication model, and summarizes what operations are available. An AI system discovering WFW for the first time — perhaps because a developer typed "I want to provision a voice agent for my new customer using Workforce Wave" into Claude — gets an immediate, accurate understanding of the platform without having to infer it from marketing pages.

The Compounding Effect

The reason WFW built all three is that they compound. An AI system can discover the platform via llms.txt, understand the Agent Card to find individual agents, and use MCP to interact with the platform programmatically. A DSO's management AI, an insurance company's eligibility AI, and a developer building a new integration all get what they need through different parts of the same infrastructure.

Each standard alone is useful. Together, they make the platform genuinely AI-native — accessible not just to human users through a dashboard, but to AI systems through the protocols that AI systems speak.


Section 7: What to Do Now

The standards formation event is underway. These are the concrete actions that forward-thinking businesses should take now.

Audit your software stack for MCP readiness. Make a list of the 5-10 software systems most critical to your operations. For each one, find out whether they have an MCP server, when they plan to publish one, and what operations it exposes. This audit will reveal which vendors are building for the AI-native future and which are not.

Evaluate your AI assistant access to your systems. If you use Claude, GPT-4o, Gemini, or any AI assistant for work, ask: what would you do differently if this AI could directly query your CRM, your scheduling system, your analytics platform? Map those use cases. Then match them to the MCP servers your vendors are publishing. The gap between your use cases and the available MCP servers is your integration roadmap.

Ask your voice AI vendor about A2A. If you have or are evaluating a voice AI platform, ask specifically: do your agents publish A2A Agent Cards? Can they serve AI callers with structured data? What does that structured data look like? The answers will determine whether your voice AI infrastructure is ready for the AI-to-AI world.

Follow the specification development. Both MCP and A2A are actively developed open standards. Anthropic maintains MCP. Google has been a lead contributor to A2A, with significant participation from Microsoft, Amazon, SAP, and others. Following the specification updates will give you advance notice of capabilities before they appear in products.

Consider your own AI discoverability. If you are building any kind of software service — a SaaS product, a professional services firm with a software component, a vertical application — think about whether you should publish an MCP server and an Agent Card. The businesses that are discoverable and interoperable with AI systems early will be easier to integrate with, more attractive to AI-forward partners, and better positioned for the compounding effects of AI adoption.


The Timing Advantage

Standards formation events reward early movers, but not infinitely early. The businesses that built for HTTP in 1992 were too early — the infrastructure wasn't there. The businesses that built for HTTP in 1995 caught the wave. The businesses that discovered the web in 2003 spent years in catch-up mode.

MCP and A2A are in their 1995 moment. The standards are stable enough to build on. Adoption is accelerating. Enterprise players are committing. The infrastructure is available. The first-mover advantage is still real.

The cost of acting now is a few days of evaluation — reading the MCP spec, asking vendors the right questions, understanding your existing stack's readiness. The cost of acting late is years of catch-up in a world where your competitors' AI systems can interact with the full surface of the business software ecosystem while yours are still waiting for custom integrations.

The choice is straightforward. The timing is now.
