Two years ago, every major AI platform was racing to build its own "plugin store." OpenAI had ChatGPT Plugins, then custom GPTs. Vendors built dozens of bespoke connectors. The pattern was familiar: proprietary marketplaces, each with its own API contract, each growing in isolation.
Then Anthropic shipped Model Context Protocol (MCP) in late 2024, and within twelve months the entire shape of the AI-tools industry changed. Plugin stores stalled. MCP servers proliferated. Cursor, Zed, Continue, and even tools from competing AI vendors all adopted MCP as their integration layer.
This article is an opinion piece — but a documented one. Why did MCP win this round so quickly, and what does it tell us about where AI tooling is heading?
## The plugin-store era and its limits
The original idea behind AI plugins was reasonable: give the LLM a way to call external services through a marketplace-curated interface. Users discover plugins, install them with a click, and the chatbot gains new capabilities.
But the model had four structural problems.
1. Each vendor owned the marketplace. A plugin built for ChatGPT could not run in Claude. A GPT built in OpenAI's ecosystem was locked there. Every plugin author had to pick a horse and bet on it.
2. Plugins were tied to a specific UI. They lived inside the vendor's app. They could not be invoked from your IDE, your terminal, your CI pipeline, or your custom agent — only from inside the chat product that hosted them.
3. Discovery was centralised. The marketplace gatekeeper (the AI vendor) decided which plugins were visible, what guidelines they followed, what fees applied.
4. The economics did not work for tool authors. Building a plugin meant building for one ecosystem, hoping that ecosystem grew, and accepting whatever distribution terms the platform offered.
Meanwhile, every developer building an AI feature for their own app was hand-rolling tool integrations from scratch. The same Slack-connector logic was being reimplemented inside dozens of products.
The ecosystem was hungry for a different shape.
## What MCP did differently
Anthropic published the Model Context Protocol as an open specification in November 2024. The choices that shaped its trajectory:
- No marketplace, no gatekeeper. Anyone can publish an MCP server to npm or PyPI; anyone can install one with a single config line. Distribution is npm, GitHub, pip — infrastructure that already works.
- Transport-agnostic. stdio for local, Streamable HTTP for remote. Run a server on your laptop, on a corporate VM, on Cloudflare Workers, on a Kubernetes pod — same protocol, same client compatibility.
- LLM-agnostic. The server does not know whether Claude, GPT-4o, Llama, or a local Mistral model is on the other end. The protocol is the contract; the LLM is interchangeable.
- Plain JSON-RPC under the hood. A protocol any developer could implement from scratch in an afternoon. No SDK lock-in, no proprietary blob.
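To make the "implement it in an afternoon" point concrete, here is what one MCP-style exchange looks like on the wire, built with nothing but the standard library. The method name and envelope shape follow the published spec; the tool name and its arguments are hypothetical.

```python
import json

# A minimal MCP-style exchange as it would appear on the wire (the stdio
# transport frames one JSON object per line). The method and result envelope
# follow the MCP spec; the tool name and arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",              # hypothetical tool
        "arguments": {"query": "memory leak"},
    },
}
response = {
    "jsonrpc": "2.0",
    "id": 1,                                  # echoes the request id
    "result": {
        "content": [{"type": "text", "text": "12 open issues found"}],
    },
}

# Newline-delimited JSON is the whole framing story for the stdio transport.
wire = json.dumps(request)
print(wire)
```

No SDK is required to produce or parse either message; that is the entire point of "no proprietary blob."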
The combination produced something the plugin era had not: a shared standard with no owner.
## What changed in twelve months
The adoption curve speaks for itself.
| Quarter | Milestone |
|---|---|
| Q4 2024 | MCP spec published; reference server SDKs in TypeScript and Python |
| Q1 2025 | Claude Desktop ships first-class MCP support; first community servers appear |
| Q2 2025 | Cursor adds MCP integration; ecosystem crosses 100 public servers |
| Q3 2025 | Zed, Continue, Sourcegraph Cody adopt MCP |
| Q4 2025 | Streamable HTTP transport stabilises; first hosted MCP servers go live |
| Q1 2026 | OpenAI Agents SDK adds MCP client support; LangChain ships native MCP adapters |
| Q2 2026 | Hundreds of open-source MCP servers exist for major SaaS; "MCP-first" appears in job descriptions |
Meanwhile, OpenAI Plugins were deprecated. GPT custom-actions saw stagnant growth. The vendor-specific plugin pattern, less than two years old, was already legacy.
## Why MCP won so fast
Four structural reasons explain the speed.
### 1. It was a protocol, not a product
Protocols win when there is a clear M×N integration problem. M AI clients × N external systems = M×N bespoke integrations. MCP collapsed that to M+N. Every AI client speaks MCP once; every external system is wrapped in an MCP server once. The combinatorial explosion goes away.
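The arithmetic is worth seeing with numbers in it. The counts below are invented for illustration, not taken from the article:

```python
# Back-of-the-envelope illustration of the M×N collapse.
def bespoke(clients: int, systems: int) -> int:
    """Pre-protocol world: every AI client needs its own connector per system."""
    return clients * systems

def with_protocol(clients: int, systems: int) -> int:
    """Protocol world: each client speaks MCP once, each system is wrapped once."""
    return clients + systems

print(bespoke(20, 50))        # 1000 bespoke integrations to build and maintain
print(with_protocol(20, 50))  # 70 implementations total
```

At 20 clients and 50 systems the gap is already fourteenfold, and it widens as either side of the market grows.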
The same shape underlies the wins of HTTP, SMTP, USB, OAuth 2.0, and a hundred other standards. Once enough players coordinate around a protocol, marketplace-style lock-in becomes an economically dominated strategy.
### 2. Anthropic gave up control
The single most important decision: Anthropic published MCP as an open spec with no proprietary hooks. No license fees. No certification process. No "approved by Anthropic" stamp. The protocol lives in a GitHub repo with a Working Group, not a vendor portal.
This matters because it neutralised the obvious objection — "why would we adopt a standard owned by a competitor?" Cursor was happy to adopt MCP because there was no Anthropic-specific tax to using it. Zed could implement an MCP client without Anthropic's approval. The protocol is just JSON-RPC over stdio or HTTP, both fully documented, both implementable from scratch.
When Anthropic gave up the right to gatekeep, every potential competitor had cover to adopt.
### 3. It composed with what already worked
MCP is not a clean-sheet rewrite of how software talks to AI. It is a thin layer on top of patterns developers already knew — JSON-RPC, JSON Schema, stdio subprocesses, HTTP. There was no "learn a new SDK" tax. A Node developer could build their first MCP server in 25 lines because every concept (transport, message format, schema validation) was already familiar.
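The "every concept was already familiar" claim fits in a short sketch. This is Python rather than Node, and the tool table and message shapes are illustrative rather than the official SDK, but the moving parts are the same ones the paragraph names: JSON messages, a dispatch function, and a line-oriented stdio loop.

```python
import json

# Illustrative tool table: name -> handler. A real server would wrap a SaaS
# API or a database here; "add" is a stand-in.
TOOLS = {"add": lambda args: args["a"] + args["b"]}

def handle(msg: dict) -> dict:
    """Dispatch one JSON-RPC tools/call request and wrap the result in an
    MCP-style content envelope."""
    tool = TOOLS[msg["params"]["name"]]
    result = tool(msg["params"]["arguments"])
    return {
        "jsonrpc": "2.0",
        "id": msg["id"],
        "result": {"content": [{"type": "text", "text": str(result)}]},
    }

# Over the stdio transport the server loop is just: read a line from stdin,
# json.loads, handle, json.dumps, print. Shown here with an in-memory message.
req = {"jsonrpc": "2.0", "id": 7, "method": "tools/call",
       "params": {"name": "add", "arguments": {"a": 2, "b": 3}}}
print(json.dumps(handle(req)))
```

Transport, message format, and schema validation are the only concepts in play, and all three predate MCP by a decade or more.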
Contrast with proprietary plugin SDKs that required learning bespoke manifest formats, sandbox runtimes, and platform-specific tooling.
### 4. The economic flywheel kicked in early
Because MCP servers worked across many AI clients, building one had higher ROI than building a single-platform plugin. Open-source authors prefer high-leverage targets. Within months, every popular SaaS had at least one community MCP server. That deepened the moat for every host that adopted MCP — they got the entire ecosystem for free.
Meanwhile, vendor-specific plugin stores had to commission, curate, and host every integration themselves. The asymmetry was structural.
## What this means for developers
If you are building anything that uses LLMs as part of a product or workflow, three implications follow.
Tooling should be MCP-shaped by default. When you build a connector to an external system, build it as an MCP server. You will get to use it across Claude Desktop, Cursor, your custom agent, and any future host without rework.
Lock-in is moving up the stack. The proprietary differentiator is no longer the plugin ecosystem — it is the LLM quality, the agent orchestration logic, the UX. AI vendors will compete on those layers, not on capturing tool authors.
Internal tooling is the killer use case. The biggest payoffs from MCP are not third-party SaaS connectors — those existed in plugin form too. The unique value is wrapping your own internal services, your own databases, your own dashboards. An MCP server in front of an internal admin API gives every AI assistant in your company access in a way that previously required per-app integrations.
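As a sketch of that internal-wrapping pattern: the handler below fronts a stand-in account-lookup service (a dict here, so the example runs; in practice an authenticated HTTP call behind the firewall) and pairs it with the kind of JSON Schema descriptor a host would receive from `tools/list`. Every name and field is invented for this example.

```python
import json

# Stand-in for an internal admin API so the sketch stays runnable; in practice
# this would be an authenticated HTTP call to a service behind the firewall.
FAKE_ADMIN_API = {"user-42": {"plan": "enterprise", "status": "active"}}

def lookup_account(arguments: dict) -> dict:
    """Tool handler: expose internal account lookup to any MCP host."""
    record = FAKE_ADMIN_API.get(arguments["account_id"], {})
    return {"content": [{"type": "text", "text": json.dumps(record)}]}

# Descriptor a host would see via tools/list (envelope shape per the MCP spec;
# the name, description, and fields are invented for this example).
TOOL_DESCRIPTOR = {
    "name": "lookup_account",
    "description": "Look up an internal customer account by id",
    "inputSchema": {
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
}

print(lookup_account({"account_id": "user-42"})["content"][0]["text"])
```

Write this once and every MCP-speaking assistant in the company can call it; the per-app integration work disappears.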
## What MCP does not (yet) solve
It is worth being honest about the gaps.
- Auth standardisation is incomplete. OAuth 2.0 support in Streamable HTTP is converging but still maturing. For now, most servers use API keys passed via env variables, which works but is awkward for multi-tenant deployments.
- No payments layer. AI plugins had marketplace-mediated billing. MCP has none; tool authors monetise via the underlying service, not the protocol.
- Discovery is informal. There is an MCP registry, but no single canonical "AppStore for MCP." You find servers via GitHub search, npm, blog posts. This is healthy in open-source spirit but mildly annoying when you just want to know "is there an MCP server for X."
- Quality varies. Anyone can publish anything. The user is responsible for evaluating each server's safety, reliability, and trust.
None of these are fatal. Each is being worked on. But it is worth knowing they are not yet solved as you build.
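The env-var auth workaround, and why it pinches in multi-tenant setups, fits in a few lines. The variable name and value below are illustrative, not a convention from any particular server:

```python
import os

# The host's config launches the MCP server with the upstream credential in
# its environment. Simulated here; the variable name is illustrative.
os.environ.setdefault("UPSTREAM_API_KEY", "demo-key-from-host-config")

def auth_header() -> dict:
    """Every request from this server process carries the same one credential;
    fine on a personal laptop, awkward when many tenants share one server."""
    return {"Authorization": "Bearer " + os.environ["UPSTREAM_API_KEY"]}

print(auth_header()["Authorization"])
```

One process, one key: per-user delegation is exactly what the maturing OAuth work is meant to supply.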
## The bigger picture
What happened with MCP is the standard arc of platform evolution. Vendor-specific stores arise first because someone needs to ship value fast. Then a protocol emerges and the value moves into the protocol layer. Plugin stores were AOL keywords; MCP is HTTP.
The winners in the next phase will be the products that build on top of the MCP layer — better AI assistants, better orchestration tools, better domain-specific agents — rather than the products that try to own a slice of the connector layer itself.
If you are still betting on vendor-locked plugin marketplaces as your AI integration strategy in 2026, you are betting against the structural force that has consistently dominated platform evolution for thirty years. Standards win. They always win. The only question is how long the proprietary phase lasts.
## Conclusion
MCP is not eating AI plugins because it is technically miraculous. It is eating AI plugins because it is the protocol-shaped solution to a protocol-shaped problem. M×N to M+N, no gatekeeper, transport-agnostic, runtime-agnostic, with adoption from Anthropic's strongest competitors as the closing argument.
The lesson is not "MCP is the future" — though that is also true. The lesson is that every time the AI industry produces a coordinated standard, vendor lock-in evaporates in the layer below it. The same dynamic will play out again at the agent layer, at the memory layer, at the safety layer. MCP is just the first instance of the pattern.
Build on the standard. Build on top of it. Do not build around it.
## Try it yourself
A practical demonstration of the protocol-over-plugin argument — the same MCP server, queried from two different hosts, behaves identically:
**First host**, calling `search_issues`:

> Found 12 open issues from the past week mentioning memory leak. Top 3 by reactions:
>
> • node/node#52831 — Memory leak in cluster worker after IPC disconnect (87 👍)
> • vercel/next.js#69412 — Memory leak in dev server when editing layout.tsx (52 👍)
> • oven-sh/bun#11203 — Memory leak in Bun.serve with WebSocket connections (38 👍)
>
> Want me to pull the full description of any of them?

**Second host**, calling `search_issues`:

> (Identical result. Same 12 issues. Same ordering. Different host, identical tool.)

One MCP server. Two hosts. Same answer. That is the whole pitch — and the structural reason vendor-locked plugin stores stalled while MCP kept growing.
