MCP is Dead

There are now thousands of MCP servers. Salesforce has one. GitHub has one. Atlassian, Notion, Linear – all of them are scrambling to ship MCP endpoints as fast as they can. The trade press is calling it a revolution in how AI connects to the world.

It is not a revolution. It is a last stand.

MCP is the incumbent software industry’s last attempt to stay in the loop. It won’t work.

What Software Is, Actually

Before you can understand why MCP is dying, you need to understand what software fundamentally is.

Software is pre-baked decisions about data transformations. That is the whole thing. A CRM like Salesforce is decades of decisions about how sales data should be structured, filtered, and surfaced. A project tracker like Linear is a set of decisions about how work should be represented and moved through states. The UI is just a delivery mechanism. The API is just a narrower delivery mechanism. The MCP server is just a narrower-still delivery mechanism.

The value was never in the interface. The value was in the accumulated decision-making encoded inside. That is what people bought. The software vendors just happened to be the only ones who could package and deliver it.

That monopoly on packaging is ending.

What MCP Is Actually Doing

Ask yourself why every major software vendor is sprinting to build MCP servers right now.

It is not altruism. It is not because MCP is technically superior to REST or GraphQL. It is because these companies are staring at a world where AI agents are doing the work their software used to orchestrate – and they need to stay in the loop.

MCP gives them that. An agent that needs to update a Salesforce record has to call the Salesforce MCP server. The data still lives in Salesforce. The billing relationship still exists. The vendor stays relevant.

This is the Kodak moment in slow motion. Kodak did not fail because they missed digital – they actually invented the digital camera. They failed because they could not let go of film, because film was where the money was. MCP servers are the film. The vendors know the camera changed. They just need the film to last a little longer.

The protocol itself is not even particularly good. It has no enforced authentication – security is recommended, not required. Early implementations leaked session IDs in URL query strings. Stateful sessions fight load balancers. There is no standard risk categorization for tools, no governance model, no clear answer to “who is responsible when a tool poisons the agent’s context with malicious instructions.” These are not implementation bugs. They are design choices made by a consortium of companies optimizing for adoption speed, not architectural integrity.

Skills Are the Different Model

Here is a concrete example of what the alternative looks like.

Say you are running vendor due diligence – evaluating a supplier before a major contract. The job involves pulling financial filings, checking litigation history, cross-referencing industry databases, assessing news sentiment, and synthesizing a risk picture. It is not one data source. It is twelve, and none of them were designed to work together.

Wire an agent to the MCP servers that exist: maybe a legal database vendor, maybe a financial data provider. The agent now has five tools. The job requires fifty distinct lookups and judgments. The MCP servers expose what their product managers decided was worth exposing. The rest – the SEC EDGAR full-text search, the county court records portal, the trade publication archive, the niche supplier database that only has a REST API and no MCP server at all – is simply not reachable through the managed interface.

So you are left with an agent that can do the easy parts and has no path to the hard parts.

Now give the same agent a skill instead. A document encoding what matters in vendor due diligence and why: which signals actually predict supplier failure, how to weight conflicting indicators, what a clean cap table looks like versus a messy one, which court record patterns are noise versus flags. Then give it primitive tools – HTTP requests, a browser, a shell – and let it work.
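To make that concrete: a fragment of such a skill document might look like the following. This is a hypothetical sketch – the frontmatter fields and the markdown-with-frontmatter shape are assumptions, loosely modeled on how some agent frameworks package skills, not a real artifact from any vendor.

```markdown
---
name: vendor-due-diligence
description: Assess supplier risk before a major contract
---

# Vendor Due Diligence

## Signals that predict supplier failure
- Late or repeatedly amended financial filings matter more than one bad quarter.
- A pattern of small-claims suits from subcontractors is a stronger flag than
  a single large, well-publicized lawsuit.

## Weighing conflicting indicators
When news sentiment and filing data disagree, weight the filings: sentiment
lags, filings lead.
```

Note what is absent: no endpoints, no tool schemas, no pre-decided call sequence. The skill carries judgment; the agent supplies the plumbing.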

That agent reads the EDGAR API docs and constructs the right query. It figures out the county court records portal’s search parameters by inspecting the page. It hits the niche supplier database’s REST endpoints directly. It does not wait for any of those vendors to ship an MCP server. It does not conform its approach to whatever tool schema someone else designed. It finds the best programmatic path to each piece of data, specific to this job, and assembles the picture.
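As a sketch of what that generated, single-use access code might look like – where the EDGAR full-text search endpoint and parameter names are illustrative assumptions the agent would instead derive from the live API docs, not a documented contract:

```python
from urllib.parse import urlencode

# Hypothetical base URL for SEC EDGAR full-text search (an assumption
# for illustration; a real agent would read the current API docs and
# construct whatever the service actually expects).
EDGAR_FTS = "https://efts.sec.gov/LATEST/search-index"

def build_edgar_query(vendor_name: str, form_type: str = "10-K") -> str:
    """Build a search URL for filings that mention the vendor by name."""
    params = {"q": f'"{vendor_name}"', "forms": form_type}
    return f"{EDGAR_FTS}?{urlencode(params)}"

url = build_edgar_query("Acme Supply Co")
# The agent would fetch this URL, parse the hit list, and feed matching
# filings into its risk assessment -- then discard the code.
print(url)
```

The point is not this specific snippet; it is that the snippet is disposable. It exists for one task, shaped by the skill, and nothing about it needed a vendor to ship an interface first.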

The skill is not an API surface. It is domain expertise made legible to a reasoning system. The agent generates the implementation at runtime, adapted to the actual task, not the average one.

This is the pattern that replaces MCP – and it exposes something software vendors are not ready to reckon with. When an agent can navigate your API documentation directly, your MCP server is not a feature. It is a constraint. You have pre-decided what the agent is allowed to do, in what order, through what interface. The agent with a skill does not need your permission structure. It needs your data.

There is a second trap vendors are falling into here. Some of them see this coming and are trying to protect their position by bifurcating: put some capabilities in the API or MCP server, keep the rest behind bot-protected web UIs. Make the agent go through the human interface for anything sensitive or strategically important. This is a slower version of the same losing strategy. Agents are increasingly good at navigating web interfaces when they have to. And more importantly, the capabilities you are withholding behind a protected UI are not actually your moat. They are just friction. The moat was always the data and the domain model behind the interface – and an agent that understands the domain does not need you to curate its access path.

This is what Karpathy was pointing at when he said code is now “free, ephemeral, malleable, discardable after single use.” When code generation is essentially free, pre-packaging that code into server endpoints loses most of its value.

Real-Time Capability Building

The LLM framework explosion of 2023-2024 – LangChain, LlamaIndex, all of it – was built for a specific moment. LLMs were less capable, needed heavy scaffolding, could not reliably use tools or follow complex instructions. So developers pre-built the scaffolding and packaged it.

That moment is over.

Better models with native tool calling, expanded context windows, and improved reasoning have made most of those abstractions unnecessary. When an agent can generate a custom pipeline on demand, why would it use a pre-built one? The pre-built one is optimized for the average case. The generated one is optimized for the actual case, right now, with the actual data.

MCP is the same bet, one abstraction layer up. Instead of “here is a pre-built pipeline,” it is “here is a pre-built interface.” But if an agent can synthesize the interface from documentation and context, you have the same problem. The pre-built thing becomes the inferior option.

The vendors building MCP servers are not wrong that agents need to access their data. They are wrong that the way to make that happen is to design the access layer. Agents with good tools and good skills will design their own access layer. They will do it better, because they will do it for the specific task at hand rather than for the generalized case some API designer imagined.

What Survives

Data survives. Domain models survive. The accumulated decisions encoded in fifteen years of Salesforce configuration – the custom fields, the workflow rules, the permission structures that map to actual business logic – that is genuinely valuable. Nobody is rebuilding that from scratch.

What does not survive is the software vendor’s position as the mandatory intermediary between that data and the agents who need it.

The companies that understand this will start thinking about their data as the product and their software as scaffolding that happens to hold it. The companies that do not will keep building MCP servers, watching adoption metrics climb, and wondering why revenue is falling.

MCP is not a bridge to the future. It is a toll booth on a road that is being paved around.

The agents are not coming. They are here. The question is whether the software sitting in front of your data is helping them or just charging rent.