The MCP Ecosystem: How a Protocol Became the Connective Tissue of Agentic AI
MCP has 97M monthly SDK downloads, 10,000+ servers, and backing from every major cloud provider. The protocol war is over. Now the real infrastructure market begins.
Executive Summary
In November 2024, Anthropic quietly published a specification that most people ignored. Sixteen months later, that specification — the Model Context Protocol — has 97 million monthly SDK downloads, more than 10,000 active server implementations, and the backing of every major cloud provider, both leading model companies, and a Linux Foundation governance structure. MCP is the first AI protocol to achieve genuine cross-vendor standardization, and it is doing so faster than almost any infrastructure standard in recent memory.
This is not a story about clever technology. The protocol itself is straightforward JSON-RPC. This is a story about timing, ecosystem dynamics, and the infrastructure markets that coalesce around successful standards. When a protocol wins, it creates a gravity well — and the companies that build the tooling around it capture more durable value than the protocol itself.
Three takeaways from this report:
MCP has effectively won the tool-integration protocol war. With ChatGPT, Claude, Gemini, Copilot, Cursor, and VS Code all shipping MCP support — plus cloud-native hosting from AWS, Cloudflare, and Azure — the question is no longer whether MCP becomes the standard. It already has. The question is who captures value in the ecosystem forming around it.
The protocol is ahead of its security infrastructure by 12–18 months. Research on 1,900+ open-source MCP servers found that 1 in 5 misuses cryptography, 5.5% have tool-poisoning vulnerabilities, and 84% of LLM agents are vulnerable to prompt injection through tool responses (ICLR 2025). The gap between "works in development" and "safe in production" is where the near-term investment opportunity lives.
The discovery, governance, and observability layers are wide open. The MCP marketplace landscape is fragmented, monetization is nearly nonexistent, and enterprise-grade audit infrastructure doesn't exist yet. These are the layers where the durable businesses will be built.
Market Overview
The Standard That Wasn't Supposed to Win This Fast
Standards battles in technology usually take years. TCP/IP took a decade to displace alternatives. REST took most of the 2000s to win against SOAP. USB-C fought a long, ugly war with competing connector standards before legislation forced convergence.
MCP's ascent has been different. From release in November 2024 to industry-standard status by early 2026 — roughly 15 months — represents a compression that's only possible when the ecosystem is already primed for the abstraction, and a trusted player releases it as an open standard.
The timing factors were real:
Agent workloads arrived faster than anticipated. By Q3 2025, enterprise AI teams were building multi-step agent workflows at scale. Custom tool integrations were breaking under the combinatorial complexity of N agents × M tools × P auth schemes. The pain was acute and widely shared.
MCP shipped at the right abstraction level. It doesn't try to solve orchestration, memory, or planning — it solves one specific problem (how do AI systems discover and call external tools?) with a clean, implementable specification. The scope restraint is why it spread.
Anthropic's credibility carried the initial adoption. Claude's user base tried MCP first. Once Cursor, Zed, and VS Code shipped native support, the developer community was already building servers before competing model companies had to decide whether to join or compete.
The governance shift in December 2025 locked in the outcome. By donating MCP to the Linux Foundation's newly formed Agentic AI Foundation — co-founded with Block and OpenAI, backed by Google, Microsoft, AWS, Cloudflare, and Bloomberg — Anthropic transformed a company specification into neutral industry infrastructure. Competing with MCP now means competing with an industry coalition. That's a fight nobody wants to pick.
What MCP Actually Does
The protocol solves a specific, bounded problem: enabling AI clients to discover and invoke external capabilities in a structured, permissioned way.
The architecture has three components:
Clients are AI-facing applications: Claude Desktop, ChatGPT, Cursor, VS Code, and Copilot Studio. They expose an MCP interface and manage the connection to servers.
Servers implement specific capabilities: querying a database, posting to Slack, reading a file system, and calling a REST API. A server can expose three types of primitives — Tools (callable functions), Resources (readable data), and Prompts (templated instructions).
The protocol handles discovery (how a client learns what a server can do), invocation (how it calls a specific tool), and transport (currently stdio for local connections, HTTP/SSE, and Streamable HTTP for remote connections). Authentication is deliberately left to the implementer, which is both a design choice and the source of most of MCP's current security problems.
Before MCP, building an AI agent that could use five external tools required five custom integrations, each with its own auth handling, error model, and schema. MCP collapses that to one protocol and five servers. The simplification is real, and it's why adoption happened faster than anyone predicted.
The Stack: Layer-by-Layer Breakdown
Layer 1: SDKs & Core Frameworks
The official Anthropic SDKs for Python and TypeScript are the starting point, but the ecosystem has already grown a second generation of higher-level frameworks that abstract protocol boilerplate and add developer experience improvements.
Manufact is the most interesting bet in this layer. Their $6.3M seed from Peak XV (formerly Sequoia India/SEA) in February 2026, with YC and Supabase angels participating, signals institutional conviction in the "picks and shovels for MCP" thesis. Their framing — "like Stripe measuring the global GDP, we're measuring global tool calls made by agents" — is the right lens. If MCP becomes the TCP/IP of agent-tool communication, the SDK layer captures transaction-level data that compounds into a durable data moat.
The risk: Cloudflare, Vercel, and Docker have all launched competing MCP hosting and tooling. Manufact's answer is developer experience and open-source distribution. Whether that creates sufficient differentiation from cloud provider bundles is the open question.
Layer 2: Hosting & Remote Execution
Local MCP servers (running via stdio on a developer's machine) are fine for prototyping. Production agents need remote, scalable, persistent server infrastructure — and this is where the market is forming fastest.
Cloudflare's April 2025 announcement — the industry's first remote MCP server hosting — was a significant market-shaping move. Their pitch: global network (200+ cities), 50ms latency to 95% of internet users, Durable Objects for stateful persistence, and a free tier that removes cost friction from early adoption. Matthew Prince's quote at launch — "Cloudflare is the best place to build and scale AI agents. Period." — was intentionally aggressive.
The cloud-native platforms are a real threat to pure-play MCP hosting startups. The counter-argument is that developer tooling markets often support a specialized player even after cloud giants enter — the same way Vercel grew after AWS Amplify launched. Speed of iteration, better defaults, and first-mover developer relationships are defensible, at least through the current adoption phase.
Layer 3: Discovery & Marketplaces
With 10,000+ public MCP servers and that number growing daily, discovery is increasingly the bottleneck. How does a developer find the right server for their use case? How does an enterprise evaluate whether a server is trustworthy enough to deploy? How do server authors get distribution?
The current landscape is fragmented, undermonetized, and ripe for consolidation:
The discovery layer is the most obvious whitespace in the MCP ecosystem. The parallel to app stores or package registries is instructive: npm, PyPI, and the iOS App Store all created enormous value not from the packages themselves but from distribution, trust, and the economic infrastructure around them.
No one has built the "App Store" for MCP servers — a platform that combines quality curation, security scanning, compatibility testing, support SLAs, and monetization infrastructure. MCP Marketplace has the revenue model (85/15 rev-share) but not the scale. Smithery has scale but is still primarily a catalog. The opportunity is real.
Layer 4: Security
Security is where MCP's explosive growth has created the sharpest mismatch between adoption and maturity. The protocol was designed for interoperability, not security. Authentication is delegated to implementers. Authorization is undefined. Audit logging is absent at the protocol level. And every MCP server is, by definition, a gateway through which an AI agent can take actions in the real world.
The Threat Taxonomy
Prompt injection via tool responses is the most pervasive threat. When an AI agent calls an MCP tool — fetching an email, reading a CRM record, pulling a GitHub issue — the content returned by that tool lands in the agent's context window. If that content contains embedded instructions, the agent may follow them. The attack requires no access to the agent system itself; attackers only need to control something the agent will read.
Research presented at ICLR 2025 found that 84% of LLM agents are vulnerable to such attacks, with 100% success rates on some undefended models using mixed attack strategies. OWASP ranks indirect prompt injection as the #1 threat for LLM applications in 2025.
Real-world attack patterns documented across common MCP integrations:
Tool poisoning is the second major vector. Unlike prompt injection (which exploits what tools return), tool poisoning attacks the tool definitions themselves. A malicious server can present benign-looking tool descriptions to users during an approval step, then silently alter the implementation after approval — a pattern called "rug pull." Research on ~1,900 open-source MCP servers found 5.5% showed tool-poisoning risks.
Supply chain attacks compound the problem. Malicious MCP servers distributed via GitHub have been documented spreading through developer environments autonomously — the equivalent of a supply chain worm in the npm/PyPI tradition, but with the added consequence that these servers have agentic action capabilities, not just code execution.
Authentication and authorization gaps are structural. A study of enterprise MCP deployments found that Personal Access Tokens (PATs) with excessive permissions are the dominant auth mechanism. There is no standardized multi-tenant auth, no fine-grained authorization framework, and no built-in audit trail. The MCP C# SDK v1.0 introduced incremental scope consent and OAuth 2.1 improvements in late 2025, but production-grade enterprise auth remains a configuration exercise, not a protocol guarantee.
The security category is where the most acute unmet demand exists. Lakera's $20M Series A (2024) and StackOne's $20M Series A from GV in February 2026 both validate the market. But neither is a pure MCP security play — they address prompt injection more broadly. The purpose-built MCP security layer remains an open opportunity.
The investment thesis in one sentence: The first company to ship a production-grade MCP security gateway — covering input sanitization, tool definition integrity verification, output filtering, audit logging, and compliance reporting — will find a market of enterprises that cannot proceed without it.
Layer 5: Observability & Debugging
When an agent running a 20-step workflow produces a wrong result, diagnosing which MCP tool call failed, what data it returned, and how the agent reasoned about that data is painful. There is no native tracing, no replay, no structured logging at the protocol level.
The general-purpose AI observability players (LangSmith, Arize, Braintrust, Langfuse) provide some visibility into agent traces, but MCP-specific tooling — tools that understand the client-server structure of MCP, can replay specific tool calls, and can diff tool responses across runs — doesn't exist yet.
The whitespace: a purpose-built MCP observability platform that treats the client-server interaction as the primary unit of analysis — with tool call replay, response diffing, latency attribution, and anomaly detection on tool behavior over time. General-purpose LLM observability doesn't cover MCP-specific failure modes (tool schema drift, server-side behavioral changes, auth expiration mid-workflow).
Layer 6: Enterprise Governance & Compliance
The governance problem is distinct from the security problem. Security asks: "Is this tool call safe?" Governance asks: "Who authorized this tool call, what policy framework did it operate under, can I audit it for SOC2/GDPR/HIPAA compliance, and can I replay it for a regulator?"
No product answers all of those questions today. The pieces exist in various states of maturity:
Approval workflows — agents need human-in-the-loop gates for high-risk actions; basic versions exist in LangGraph and LangSmith but are not compliance-grade
Audit trails — protocol-level logging is absent; every implementation rolls its own
Policy engines — what tools can which agents call, under what conditions? No standardized answer
Multi-tenant isolation — in enterprise environments with multiple teams and agents, tool access needs scoping; largely unsolved
The market here follows regulated industries. Financial services, healthcare, and legal are the initial buyers — anywhere that agent actions must be explainable, auditable, and defensible. The wedge product is likely an enterprise MCP gateway that enforces policy, generates compliance reports, and provides approval workflows. Think PagerDuty or Okta but for agent operations.
Protocol Dynamics: MCP vs. A2A vs. ACP
MCP is not the only protocol in play. Understanding the protocol landscape matters because these are not competing standards — they're complementary layers in a multi-agent architecture.
The emerging architecture for sophisticated multi-agent systems: A2A or ACP handles agent-to-agent coordination (Agent A delegates a task to Agent B), while MCP handles agent-to-tool connectivity (Agent B uses MCP to call the specific tools it needs). The protocols are complementary.
MCP's governance advantage is structural. Operating under the Linux Foundation means it's vendor-neutral in a way that A2A (Google) and ACP (IBM) cannot currently claim. Enterprises evaluating protocol bets will weight this heavily.
Investment Activity
Funding Rounds: MCP Ecosystem (2024–2026)
For context, broader agentic AI infrastructure funding in the same window:
Active Investors
Peak XV (Sequoia India/SEA) — Led Manufact seed; thesis: "every software product on earth will need an interface designed for AI agents"
South Park Commons — Early MCP bet via Smithery; consistent early infrastructure plays
Y Combinator — Multiple MCP-adjacent companies in S25 cohort (Manufact, mcp-use, others)
GV (Google Ventures) — Led StackOne Series A specifically for MCP security; direct strategic interest from Google's MCP adoption
a16z — Temporal's $300M Series D; explicit "AI agent infrastructure" thesis (Yoko Li, Martin Casado)
Valuation Context
MCP-specific infrastructure is too early for reliable public comp data, but general agentic AI infrastructure multiples provide framing:
The premium over public comps reflects growth rates and category-creation potential. Expect compression as the market matures, which is typical by Series C/D for any infrastructure category.
The Open Questions
What's Still Unsolved
Authentication and authorization at enterprise scale. The MCP spec defines how to call a tool. It does not define who can call it. Enterprise deployments with hundreds of agents, dozens of tools, and multiple teams need fine-grained access control, just-in-time provisioning, audit trails for every tool invocation, and revocation workflows. The foundation is working on OAuth 2.1 improvements in the roadmap, but production-ready enterprise auth is still being assembled piecemeal.
Dynamic tool discovery at scale. With 10,000+ public servers and growing, the current discovery model (browse a directory, manually configure a client) doesn't work at enterprise scale. Agents need to dynamically discover relevant tools from large catalogs — essentially RAG for tool selection. The MCP roadmap includes this, but implementations are absent.
Cross-agent tool-access governance. In a multi-agent system (Agent A orchestrates Agents B through D), each sub-agent may need different tool access scopes. Who authorizes what? How is that policy enforced when agents spawn sub-agents dynamically? This is unsolved architecture.
Semantic versioning and compatibility guarantees. MCP servers can change their tool schemas. An agent that depended on a specific input schema breaks silently when the server updates. There's no contract enforcement, no semantic versioning enforcement, no compatibility testing infrastructure. At scale, this creates brittle agent pipelines.
Monetization for server authors. Manufact has 5M+ SDK downloads and zero revenue. The open-source model that drives MCP adoption creates a monetization paradox: the best way to get adoption is to be free, but free doesn't support the infrastructure investment needed to make servers production-grade. Only MCP Marketplace (85/15 rev-share) and Apify have live payment infrastructure for server authors.
Opportunities & White Space
The Three Most Defensible Positions
1. Enterprise MCP Gateway / Governance Layer
The missing control plane. A product that sits between enterprise AI systems and MCP servers, enforcing who can call what under which policy, generating compliance-grade audit trails, providing approval workflows for high-stakes tool calls, and reporting for SOC2/GDPR/HIPAA.
The wedge is regulated industries — financial services, healthcare, legal — where the absence of this layer is a hard blocker to production deployment. The revenue model is per-agent-action pricing plus an enterprise subscription for compliance reporting. The moat is the policy library and audit data that accumulate over time and become increasingly difficult to replicate or migrate.
This is the "Okta for agent tool calls" position. Okta didn't build the identity provider — they built the governance and federation layer on top of it. The same architecture applies here.
2. Curated MCP Marketplace with Trust Infrastructure
The current MCP discovery ecosystem is a catalog, not a marketplace. The difference: a catalog lists things, a marketplace provides trust signals, quality guarantees, and economic infrastructure.
The product: security-scanned and continuously re-scanned MCP servers, compatibility testing against major client versions, support SLAs for enterprise servers, and monetization infrastructure (usage-based billing, license enforcement, rev-share for server authors). Distribution via direct integration with Cursor, VS Code, Copilot Studio, and Claude Desktop.
The model is Cloudflare Apps or the Okta Integration Network: you don't build all the tools, you own distribution and trust, and you take a cut of the economic activity flowing through the marketplace. At scale, this is a high-margin infrastructure business with natural network effects — more servers attract more clients, which attract more server authors.
3. MCP Security Gateway
Distinct from the governance play, which is about policy and compliance. This is about real-time threat prevention: scanning tool responses for prompt injection before they enter the agent context, verifying tool definition integrity against a signed registry (preventing rug-pull attacks), rate-limiting and anomaly detection on tool call patterns, and blocking tool outputs that match exfiltration signatures.
The technical architecture is a proxy layer — every MCP tool call passes through the security gateway, which applies rules and ML models in the critical path. The business model is per-call pricing, similar to how API security gateways (Kong, Apigee, AWS API Gateway) monetize.
The market entry point is enterprises that are already deploying agents and have experienced incidents, or whose security teams are blocking production deployment until threat controls exist. A single prevented incident (agent forwards customer data to attacker via Gmail MCP) justifies months of gateway costs.
Three Other Startup Opportunities
4. MCP Observability & Debugging Platform
Purpose-built for MCP's client-server model: trace individual tool calls across the full workflow, replay specific calls against current or historical server states, diff tool response schemas across versions, attribute latency to specific servers, and detect behavioral drift in server responses over time. General-purpose LLM observability doesn't solve these MCP-specific failure modes.
5. Vertical MCP Server Suites
Generic MCP servers (Slack, GitHub, Gmail) are already being commoditized. Deep domain expertise packaged as MCP servers — for legal discovery, medical billing codes, industrial sensor networks, financial compliance data — will command premium pricing. The protocol is horizontal; the value capture is vertical. The model: a SaaS subscription for access to a high-quality, maintained server suite for a specific enterprise software category, with SLAs and enterprise auth built in.
6. Agent Testing Environments
Testing agents against live production systems is expensive and risky. A platform that provides high-fidelity synthetic MCP servers — fake CRMs, simulated email inboxes, synthetic web content, mock financial systems — lets teams stress-test agents safely before production deployment. Think of it as Playwright for multi-step agent workflows, but where the "browser" is a suite of realistic MCP servers. The technical barrier is high (convincing fakes of Salesforce, ServiceNow, SAP), which creates defensibility once built.
The Consolidation Question
What's Commoditizing
Basic MCP SDKs and server frameworks. FastMCP, the official SDKs, and the growing list of framework generators are already commoditized. Writing the boilerplate for a new MCP server takes hours. By 2027, it will take minutes with AI-assisted tooling. Server framework choice will be a DX preference, not a strategic decision.
Single-purpose MCP connectors. An MCP server that wraps one API (a Slack server, a GitHub server) has no moat. These will be free, community-maintained, and abundant. Stainless and Speakeasy's API-to-MCP-server generators are already accelerating this commoditization. The value is not in individual connectors — it's in the curated, governed, enterprise-grade suites.
Simple directory/catalog plays. mcp.so, the GitHub registry, and similar catalogs provide value today but are not businesses. They're infrastructure that will be consumed by whoever builds the marketplace layer.
What Has Durable Moats
Policy and audit data. An enterprise that has deployed an MCP governance platform for six months has compliance audit trails, approved tool lists, policy rules, and incident history that is genuinely difficult to migrate. The switching cost isn't the software — it's the accumulated policy and audit corpus.
Security signatures and threat models. Like endpoint security vendors, MCP security gateways improve with every attack they see. A threat model trained on production MCP traffic across hundreds of enterprises will systematically outperform a new entrant. This is the Lakera/CrowdStrike model applied to agent tool calls.
Marketplace trust signals. Once server authors build reputation and reviews on a specific marketplace, and once enterprises build deployment workflows around it, switching is painful for both sides. The App Store dynamic: the platform captures value not through technical superiority but through accumulated trust infrastructure.
Vertical domain expertise. A legal discovery MCP server suite that encodes how legal teams actually work — what data sources matter, what the auth and compliance requirements are, what formatting lawyers expect — can't be replicated by a generic tool. Domain knowledge is the moat.
Acquisition Targets
The most likely acquisition targets through 2026-2027, and the acquirers:
Conclusion & What to Watch
MCP is past the inflection point. The governance is settled (Linux Foundation), the major platforms have committed (OpenAI, Google, Microsoft, AWS), and the developer ecosystem has voted with 97 million monthly downloads. The protocol war is over.
What comes next is the infrastructure build-out — and the companies that build the security, governance, observability, and marketplace layers around MCP will capture more durable value than the protocol itself. This is the same pattern that played out in every prior infrastructure standard: the real money wasn't in TCP/IP, it was in the load balancers, firewalls, and CDNs. It wasn't in REST, it was in the API gateways and management layers. MCP is no different.
The window for first-mover advantage in the governance and security layers is 12–18 months. After the first major public incident — an agent exfiltrating sensitive data, a tool-poisoning attack causing financial damage, a rogue agent taking unauthorized action at enterprise scale — the security and governance categories will get 12–18 months of acceleration overnight. That incident is not a matter of if; given that 84% of deployed agents are currently vulnerable, it is a matter of when.
Watch closely through 2026:
MCP Dev Summit (April 2-3, NYC) — First major ecosystem conference under Linux Foundation; governance roadmap for auth and authorization will be previewed here; expect startup announcements
Enterprise security incidents — The first documented MCP-based breach at a named company will be the category-defining moment for MCP security vendors; monitor CISA advisories and security research
Manufact's path to monetization — 5M+ downloads and zero revenue is unsustainable; their 2026 trajectory (targeting $2-3M ARR) will reveal whether the "Vercel for MCP" model can actually capture value from open-source distribution
Google A2A + MCP convergence — As multi-agent systems mature, the interaction between A2A (agent-to-agent) and MCP (agent-to-tool) will define the architecture of production agentic AI; watch for joint reference implementations
Cloud provider bundling — AWS, Cloudflare, and Azure are all building MCP hosting. If they bundle governance and security tooling, it will compress the standalone market for those layers. Watch their product announcements closely
The infrastructure layer beneath MCP is not a sideshow. It's the foundation of every agent-native workflow built in the next decade.
AI Primitives is a weekly deep-dive on AI infrastructure for builders and investors. Issue #2, March 2026. primitivesai.substack.com

