Agent Protocols — MCP, A2A, NLWeb, WebMCP

Agent Protocols is the AAIO Frontier category covering machine-readable capability declarations and inter-agent communication standards: MCP (the de facto standard with 10,000+ public servers and universal vendor support), A2A (Google, 150+ partners), NLWeb (natural-language discovery), and WebMCP (browser-side MCP). Sites that ship even one MCP server today get measurable agent traffic; the ones that don't are invisible at this layer.

By Chris Mühlnickel · 2026-05-04

What is Agent Protocols?

The Frontier category covering machine-readable capability declarations and inter-agent communication standards — MCP, A2A, NLWeb, WebMCP — that let AI agents discover, authorize, and call your services.

Why it matters

MCP is the standard. When a single protocol earns universal adoption across competing model vendors, IDEs, and platforms in 12 months, that's not "an emerging standard" — it's the standard. The closest historical analogue is OAuth, which took years to consolidate. MCP did it in one. The strategic question for any service that has callable actions is no longer "should I support MCP?" — it's "what's our MCP server's coverage and quality?"

Tool quality matters more than tool count. This is the most common mistake we see in our calibration corpus: SaaS vendors ship 50-tool MCP servers with sparse, internally-named tool descriptions, then wonder why agents don't pick them. Agents read tool descriptions at runtime to decide what to call; if your description was written for a developer reading docs (createInvoice — creates an invoice), the agent has nothing to reason from. The fix is to rewrite tool descriptions for the LLM-as-reader: when would you use this tool, what are the inputs, what does success look like. Eight excellent tools beat fifty mediocre ones.
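A minimal sketch of the before/after, using the MCP tool-definition shape (name, description, inputSchema). The invoice tool, its field names, and the companion tools (list_customers, send_invoice) mentioned in the description are illustrative, not a real API:

```python
# Two versions of the same MCP tool definition. The outer shape
# (name / description / inputSchema) follows the MCP tools/list format;
# the invoice fields themselves are hypothetical.

# Before: written for a developer reading docs. An agent scanning this
# at runtime has no signal for when to call it.
sparse_tool = {
    "name": "createInvoice",
    "description": "Creates an invoice.",
    "inputSchema": {"type": "object", "properties": {"data": {"type": "object"}}},
}

# After: written for the LLM-as-reader — when to call it, what the
# inputs mean, what success looks like.
rewritten_tool = {
    "name": "create_invoice",
    "description": (
        "Create a draft invoice for an existing customer. Call this when the "
        "user asks to bill a customer for goods or services already delivered. "
        "Requires a customer_id (from list_customers) and at least one line "
        "item. Returns the new invoice's id and a payment link; the invoice "
        "stays in draft until send_invoice is called."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "ID of an existing customer.",
            },
            "line_items": {
                "type": "array",
                "minItems": 1,
                "items": {
                    "type": "object",
                    "properties": {
                        "description": {"type": "string"},
                        "amount_cents": {"type": "integer", "minimum": 1},
                    },
                    "required": ["description", "amount_cents"],
                },
            },
        },
        "required": ["customer_id", "line_items"],
    },
}
```

The rewritten description answers the three questions an agent actually reasons over: when to call, what to pass, what comes back.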

A2A complements MCP rather than competing with it. Google's framing is precise: "Build with ADK, equip with MCP, communicate with A2A." The protocols compose. A customer-service agent on Platform A identifies an issue, delegates to a billing agent on Platform B via A2A, hands off to a payments agent on Platform C — all of them calling tools via MCP. The future agent web is not single-vendor. Sites that bet on MCP are betting on the right primitive; sites that also publish A2A Agent Cards are positioning for the multi-agent flow.

Authorization is the hardest part — and the most commonly failed. Most MCP servers we see ship with one of two patterns: (1) no auth at all (anyone can call any tool), or (2) a single shared API key (one credential breach compromises every user's data). Both create more risk than value. The right pattern is per-agent OAuth scopes, capability-level permissions, audit logs, and per-tool rate limits. Spekto tracks MCP Authorization (F-MCPA) as its own Frontier watcher because the failure mode is silent — the server works, agents call it, until one day a bad actor does too.
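A sketch of what capability-level permissions plus audit logging look like in front of tool dispatch. All names here (AgentGrant, TOOL_SCOPES, dispatch) are hypothetical; in production the grant would come from a validated per-agent OAuth token, not be constructed by hand:

```python
# Minimal sketch of capability-level authorization before MCP tool
# dispatch. The scope names and dispatch() helper are illustrative.
from dataclasses import dataclass

# Each tool requires a specific scope — never "one key unlocks everything".
TOOL_SCOPES = {
    "read_invoice": "invoices:read",
    "create_invoice": "invoices:write",
    "delete_invoice": "invoices:admin",
}

@dataclass
class AgentGrant:
    agent_id: str
    scopes: frozenset

audit_log: list = []

def dispatch(grant: AgentGrant, tool: str, args: dict):
    """Check the per-agent grant, record an audit entry, then call the tool."""
    required = TOOL_SCOPES.get(tool)
    allowed = required is not None and required in grant.scopes
    # Audit every attempt, allowed or denied — silent failure is the risk.
    audit_log.append({"agent": grant.agent_id, "tool": tool, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{grant.agent_id} lacks scope {required!r} for {tool}")
    return {"ok": True}  # a real server would invoke the tool handler here

# An agent scoped to read-only can read invoices but not delete them.
reader = AgentGrant("agent-123", frozenset({"invoices:read"}))
```

Per-tool rate limits would sit in the same dispatch path; the point is that every capability check happens per agent, per call, and leaves a trace.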

NLWeb and WebMCP are earlier-stage but high-leverage in narrow contexts. NLWeb works if you have good Schema.org markup already (most don't); WebMCP works if your value is browser-anchored rather than API-anchored. Neither is a universal "ship this now" move. But both compound on existing investments — NLWeb on Schema.org, WebMCP on a working web app.

Sub-topics

Frontier watchers (tracked, not yet scored)

  • F-MCP MCP Server presence — Does your service publish an MCP server discoverable via the public directory and your llms.txt?
  • F-MCPQ MCP Tool Quality — Are your tool descriptions written for an LLM-as-reader, with clear when-to-call signals, sane parameter schemas, and predictable error states?
  • F-MCPA MCP Authorization — Does your server use per-agent OAuth scopes, capability-level permissions, audit logs, and rate limits — rather than a shared static API key?
  • F-A2A A2A Agent Card — Do you publish a /.well-known/agent-card.json that other agents can discover?
  • F-NLW NLWeb Support — Does your site expose a natural-language query layer over its Schema.org markup?
  • F-WMCP WebMCP Registration — Does your web app declare WebMCP-compatible tools an in-browser agent can call?
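For the F-A2A watcher, an illustrative /.well-known/agent-card.json. The field names follow the A2A Agent Card shape (name, url, capabilities, skills); the billing-agent values are hypothetical, and the published A2A spec is the authority on exact fields:

```python
# Sketch of an A2A Agent Card, serialized as it would be served at
# /.well-known/agent-card.json. All concrete values are made up.
import json

agent_card = {
    "name": "Acme Billing Agent",
    "description": "Creates, sends, and reconciles invoices for Acme customers.",
    "url": "https://agents.acme.example/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text/plain"],
    "defaultOutputModes": ["text/plain", "application/json"],
    "skills": [
        {
            "id": "create-invoice",
            "name": "Create invoice",
            "description": "Draft an invoice for an existing customer.",
            "tags": ["billing", "invoices"],
        }
    ],
}

card_json = json.dumps(agent_card, indent=2)
```

Publishing this is the low-effort A2A move: other agents fetch the card to decide whether, and how, to delegate to yours.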

Where it's heading

MCP authorization spec maturing. OAuth-for-agents patterns, scoped tokens, capability tickets, and time-limited delegations are in active development across the Anthropic, OpenAI, and Google MCP working groups. Expect a settled standard for "how does an agent prove it has user consent for this specific MCP tool call" by late 2026.

A2A directory + agent-card discovery becoming load-bearing. The 150+ A2A partner roster is mostly enterprise — Salesforce Agentforce, ServiceNow Now Assist, SAP Joule, Atlassian Rovo. Cross-platform multi-agent flows (a Salesforce agent talking to a ServiceNow agent talking to a Stripe agent) are the use case A2A was designed for, and they're shipping in 2026.

NLWeb-style discovery layered over MCP. The natural extension is for NLWeb to cover not just static content but MCP capability schemas — an agent asks a website "what can you do for me?" and gets back a structured answer with both retrievable content and callable tools.

Agent app stores consuming MCP as first-class. ChatGPT Plugins reborn as the Apps SDK, Claude Skills, OpenAI's Apps directory, and (likely) Gemini's equivalent are all converging on MCP as the underlying capability standard. The discovery surface (the agent app store) and the protocol (MCP) decouple, which is the right shape.

Convergence pressure between MCP and OpenAI's Apps SDK. Both started as separate efforts (MCP from Anthropic, Apps SDK from OpenAI's Plugins lineage); both now overlap heavily. Expect a shake-out in 2026-2027 toward MCP as the protocol with Apps SDK as one consumption surface among several.

When to use which.

  • MCP — agent → service. Best for API-callable services; universal adoption as of April 2026; medium effort to ship. The first move for most services.
  • A2A — agent → agent. Best for multi-agent flows; 150+ enterprise adopters; low effort (just publish an Agent Card JSON). Ship if your service participates in cross-vendor agent flows.
  • NLWeb — agent → web content. Best for sites with strong Schema.org already in place; early adopters only; low effort if that markup exists. Ship if you're content-heavy.
  • WebMCP — agent → browser DOM. Best for browser-anchored apps; Chrome early preview; medium effort. Ship if your product's value lives in the browser rather than behind an API.

Common mistakes

  • Shipping an MCP server with no authorization. The single most common failure pattern; the security team should review before launch.
  • Tool descriptions copied from internal API docs. Written for developers, not for LLM-as-reader. Rewrite them.
  • 100 tools when 8 would do. Tool quality > tool count. Cull aggressively.
  • OpenAPI-as-MCP without re-thinking the schema. Generated MCP from OpenAPI is a starting point, not a finishing point.
  • Treating A2A as urgent for non-multi-agent products. A2A matters when your service participates in multi-agent flows. If you're a single-tenant SaaS being called by ChatGPT, MCP is enough.

Frequently asked

What is MCP and do I need to ship one?

MCP (Model Context Protocol, Anthropic, November 2024) is the open standard for connecting AI applications to external tools, data, and workflows. Anthropic's framing: 'a USB-C port for AI applications.' If your service has actions an agent could take on a user's behalf — booking, search, retrieval, file operations, anything callable — you should ship an MCP server. As of April 2026, MCP has 97 million monthly SDK downloads, 10,000+ public servers, and universal support across Claude, ChatGPT, Gemini, Microsoft, VS Code, and Cursor. Not shipping one means agents inside those clients can't reach your service.
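Under the hood, MCP is JSON-RPC 2.0: the client lists your tools, then invokes one via the tools/call method. A sketch of the round trip (the invoice tool and its arguments are hypothetical; the envelope shape follows the MCP spec):

```python
# What an MCP tool call looks like on the wire. MCP messages are
# JSON-RPC 2.0; tool invocation uses the "tools/call" method.
import json

request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "create_invoice",
        "arguments": {"customer_id": "cus_42", "amount_cents": 1999},
    },
}

# A success response carries the tool result as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Invoice inv_901 created (draft)."}],
        "isError": False,
    },
}

wire_request = json.dumps(request)
```

The SDKs hide this envelope, but it is what the agent's client sends to your server; everything it knows about your tool came from the descriptions you published via tools/list.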

What's the difference between MCP and A2A?

MCP handles agent-to-service: how an agent calls your tools and reads your data. A2A handles agent-to-agent: how two agents from different vendors discover each other and coordinate on a task. Google frames the stack as 'Build with ADK, equip with MCP, communicate with A2A' — the three layers compose. A2A uses Agent Cards (JSON metadata at /.well-known/agent-card.json) for capability discovery; MCP uses Tool schemas for call-time discovery. They're complementary, not competing.

How do I authorize an MCP server safely?

Authorization is the hardest part of shipping MCP — most servers ship with weak auth and become liabilities rather than assets. Spekto tracks MCP Authorization (F-MCPA) as its own watcher because of how often this fails. The right pattern: scoped OAuth tokens per agent, capability-level permissions (an agent can read invoices but not delete them), audit logs of agent actions, and rate limits that don't break legitimate usage. The wrong pattern: shipping a server with a static API key shared across all agents.

What's NLWeb? Is it competing with MCP?

NLWeb is a natural-language-discovery layer for the web, built by R.V. Guha (the creator of Schema.org). It transforms websites into conversational interfaces queryable by humans and agents, layered over existing Schema.org markup. NLWeb co-exists with MCP — Slobodan Manic's framing is 'NLWeb is to MCP/A2A what HTML is to HTTP.' If you already have good Schema.org markup, NLWeb is a small additional step; early adopters include Eventbrite, Shopify, Tripadvisor, O'Reilly, Common Sense Media, and Hearst.

WebMCP — is that a Microsoft thing? Should I care?

WebMCP is from Google, not Microsoft. It's a browser-side variant of MCP, shipped as an early preview in Chrome 146 (February 2026). Where server-side MCP runs as a separate service that agents call over HTTP/stdio, WebMCP lets a website declare MCP-compatible tools that an in-browser agent can call directly from the page context — no DOM scraping. Two APIs: a declarative one for HTML-form-style actions and an imperative one for JavaScript-driven complex flows. If your product is browser-anchored (e-commerce, dashboards, content tools), WebMCP is worth tracking; if your value lives behind an API, server-side MCP is the higher-leverage move.

Can my existing OpenAPI spec become an MCP server?

Mostly, yes. Several open-source generators turn an OpenAPI spec into a working MCP server. The catch: OpenAPI was designed for human developers; MCP tool descriptions are read by LLMs at call-time. A well-named OpenAPI endpoint with a generic description (createInvoice — creates an invoice) makes a poor MCP tool (LLMs need to know when to call it, not just what it does). Plan to rewrite tool descriptions for the LLM-as-reader audience even if you generate the scaffold from OpenAPI.

How do I list my MCP server publicly so agents discover it?

Three steps. (1) Publish to the public MCP server directory at modelcontextprotocol.io/servers (or a comparable registry). (2) Link your MCP server from the llms.txt at your site root. (3) Mention it in your developer docs and OpenAPI spec metadata. Agents that discover services via NLWeb / Schema.org will pick up properly marked-up MCP server references; agents inside vendor surfaces (ChatGPT, Claude, Gemini) consume the public directories.
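Step (2) can be a one-line addition to an existing llms.txt. llms.txt is plain Markdown (an H1, a blockquote summary, linked sections); the section name, domain, and URLs below are illustrative, not a fixed convention:

```text
# Acme

> Acme issues and reconciles invoices. Agents can act on a user's behalf via our MCP server.

## Agent access
- [MCP server](https://mcp.acme.example): remote MCP endpoint; tools for invoices and customers
- [Agent Card](https://acme.example/.well-known/agent-card.json): A2A discovery metadata

## Docs
- [API reference](https://acme.example/docs/api): REST and MCP tool documentation
```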