Why Model Context Protocol is suddenly on every executive agenda

Technology leaders are used to watching new standards emerge quietly and then disappear into the plumbing of enterprise IT. But Model Context Protocol (MCP) is following a different trajectory. Over the past year, it has moved from an obscure technical concept into the center of conversations about agentic AI, governance, and security risk. That shift reflects not hype but the practical reality of how AI systems are beginning to interact with enterprise environments.

During a Cyber Sessions interview I conducted last year with veteran security executive Andy Ellis, he predicted the inflection point before most executives had encountered the term.

“I think MCP is going to be massive at RSA,” he said at the time. “Instead of having an API tightly defined between a client and a server, you put an LLM on either end and let them negotiate what to exchange. It will revolutionize software development – and it’s going to make it really scary.”

That prediction now looks spot on. RSA Conference organizers tell me that many of their submissions for 2026 focused on the topic. For a protocol introduced in 2024, that level of attention signals rapid movement from theory to deployment. What is driving this surge is not simply curiosity among engineers. MCP is emerging as a connective infrastructure for enterprise AI, and that makes it an important emerging technology for CIOs to watch.

Integration friction meets AI acceleration

One of the most stubborn barriers to enterprise AI adoption has not been model performance but integration complexity. Organizations launched ambitious pilots only to discover that connecting AI to existing systems required time-consuming API work, brittle middleware, and specialized development skills. Ellis attributes MCP’s rapid adoption to its basic utility.

“MCP lets you plug your existing stack of applications together without all of the annoying API integration work. It’s basically a universal connector,” said Ellis.

“A lot of people started AI projects from scratch, trying to build complex systems out of thin air. MCP lets you plug your existing applications together instead.”

The protocol provides a standardized way for AI agents and applications to retrieve data and interact with enterprise tools. In a recent interview, RSAC researchers described it as “the USB-C of AI,” a connector designed to allow disparate systems to communicate without custom integration.

For CIOs facing sprawling application portfolios and pressure to deliver AI-enabled capabilities quickly, this changes the implementation equation. Integration shifts from bespoke engineering to configuration, existing systems become accessible without rebuilding them, and even non-engineers can connect data sources into AI workflows. In environments where modernization has produced complexity rather than cohesion, this alone explains why MCP is gaining traction.
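To make the “configuration, not bespoke engineering” point concrete: MCP is built on JSON-RPC 2.0, where a client discovers a server’s capabilities with a `tools/list` request and invokes a capability with `tools/call`. The sketch below uses only plain Python to show the message shapes involved; the tool name `search_tickets` and its arguments are hypothetical examples, not part of any real server.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP clients send."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: ask the server what tools it exposes.
list_tools = make_request(1, "tools/list")

# Step 2: invoke a tool by name. "search_tickets" and its arguments
# are hypothetical; real servers advertise their own tool schemas.
call_tool = make_request(2, "tools/call", {
    "name": "search_tickets",
    "arguments": {"query": "login failures", "limit": 5},
})

print(json.dumps(call_tool, indent=2))
```

Because every server speaks this same request shape, wiring a new data source into an AI workflow becomes a matter of pointing a client at a server rather than writing a custom integration.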

MCP’s rise is inseparable from agentic AI

MCP’s sudden relevance coincides with the rise of agentic AI, which requires reliable mechanisms for retrieving data and acting within systems. The protocol addresses both needs by creating a standardized method for sharing data with large language models and a standardized way for those models to act on behalf of users.

This shift may be even more important than it appears on the surface. Earlier AI integrations often relied on system-level credentials or high-privilege service accounts, creating exposure risks and limiting accountability. MCP enables actions to be performed in the context of the current user, improving traceability but introducing new governance requirements.

As RSA researchers explained, “MCP solves two important aspects of agent AI and AI chatbots. It creates a standardized way to share data with an LLM, and it also creates a standard way of having an LLM act on behalf of the current user.”

This is what elevates MCP beyond technical convenience. By enabling LLMs to act on behalf of a human or organization, it shifts the core question from what an AI system can see to what it can do – moving MCP out of the engineering domain and into governance, identity, and risk management.
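The governance implication above can be sketched in a few lines: instead of a shared service account that can do everything, each tool call is authorized against the entitlements of the user the agent is acting for. This is an illustrative sketch, not MCP’s actual authorization model; every name in it (`USER_ENTITLEMENTS`, `TOOL_SCOPES`, the users and tools) is hypothetical.

```python
# Hypothetical entitlement data: which scopes each user holds.
USER_ENTITLEMENTS = {
    "avery": {"tickets.read", "tickets.assign"},
    "jordan": {"tickets.read"},
}

# Hypothetical mapping from tool names to the scope they require.
TOOL_SCOPES = {
    "read_ticket": "tickets.read",
    "assign_ticket": "tickets.assign",
}

def authorize(user, tool_name):
    """Allow the call only if the acting user holds the tool's scope."""
    required = TOOL_SCOPES.get(tool_name)
    granted = USER_ENTITLEMENTS.get(user, set())
    return required is not None and required in granted
```

Under this model, an agent working on Jordan’s behalf could read tickets but not reassign them, and every action remains attributable to a specific person rather than to an anonymous high-privilege credential.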

Governance is moving closer to the protocol layer

Security researchers are focusing heavily on the risk dimension of MCP adoption. RSA organizers tell me that if MCP-related conference submissions are categorized by emphasis, fewer than 4% fall primarily into the opportunity category. Developers and automation teams have already embraced the benefits; the security community is concentrating on the exposure.

The risks are not abstract. MCP tooling can be over-permissioned, untrusted MCP servers can enable data leakage or prompt injection, and malicious tool impersonation or authentication bypass scenarios can create pathways for compromise. One RSA session will demonstrate how an MCP vulnerability could enable remote code execution and full takeover of an Azure tenant.

The broader concern is structural. MCP integrations can be created by anyone experimenting with AI tooling, expanding the attack surface beyond enterprise-approved systems to an ecosystem of community-built connectors that may never undergo security review. RSA researchers warn that the relative lack of default security controls, combined with the speed of adoption, means organizations should anticipate destructive incidents that security teams may not see coming.

The adoption velocity problem

As with many AI developments, the speed of experimentation is outpacing the development of governance controls. Vendors are embedding MCP connectors into enterprise products to simplify AI access. Coding assistants already rely on the protocol extensively. Developers and so-called “vibe coders” are using it to connect systems and automate workflows with minimal friction.

At the same time, AI agents remain non-deterministic by design, and MCP tooling can grant them powerful operational capabilities. The combination of ease of integration and expansive permissions creates a risk profile that organizations have not previously managed at scale. For CIOs, this is a familiar tension. The operational value is clear, but unmanaged adoption introduces systemic exposure.

Practical use cases are already emerging

MCP has moved beyond proof-of-concept environments into operational workflows. Organizations are using it to collect data across systems for incident management, read support tickets and assign priority levels, move items into internal tracking systems, and interconnect security, logging, and file platforms. Coding assistants and AI-augmented task automation tools are increasingly dependent on it.

These use cases are compelling because they reduce context switching, eliminate manual data gathering, and allow AI to support workflows without deep integration projects. They also illustrate why governance must evolve in parallel with adoption.

Questions CIOs should be asking now

MCP is becoming a control layer for how AI systems interact with enterprise environments. That reality requires executive visibility and deliberate policy decisions.

Leaders should begin by understanding where MCP is already in use, particularly within developer tools and AI assistants. They should examine who has the authority to create integrations, how permissions are granted and constrained, and how MCP servers are authenticated and trusted. Governance policies may need to move closer to the protocol layer to ensure consistent enforcement across integrations.

Equally important is recognizing that MCP adoption may not arrive through a single enterprise initiative. It is likely to emerge through tools, vendors, and experimentation occurring simultaneously across the organization.

A structural shift, not a passing trend

Protocols rarely become executive concerns. MCP is becoming one because it sits at the intersection of AI execution, system integration, and enterprise governance. It enables AI to analyze enterprise data and to act within enterprise systems. That capability is powerful and transformative, and it also changes the risk equation.

The rise of MCP signals a shift in enterprise architecture from systems that expose data to systems that intelligent agents can operate. CIOs do not need to slow adoption to manage this shift, but they do need to ensure that identity controls, governance policies, and security guardrails evolve at the same pace.

Once agents can operate across enterprise systems, the central question shifts from whether AI can access the environment to how safely and responsibly it can act within it.

This article is published as part of the Foundry Expert Contributor Network.