Every few years, CIOs face a version of the same board question: What are we doing about this new technology? Today, the answer is expected to be AI. The pressure is real. The competitive environment is real. The board’s desire to see progress is legitimate, and I don’t dismiss any of it.
What’s worth examining is how that pressure has been absorbed. In many organizations, the response to board urgency has become a form of performance. Pilots accumulate. Vendor relationships multiply. Progress updates circulate. From a distance, this looks like an organization investing seriously in AI. On closer inspection, almost nothing has changed in how the business actually operates. The infrastructure decisions that AI depends on, the workflow redesign, and the data readiness work all remain undone.
In the last year, I’ve sat in multiple board prep sessions where the AI slide had 15 active pilots on it. Three described as promising. One on hold pending data access. None tied to a measurable business outcome. The update takes ten minutes and produces two follow-up questions. The CIO leaves with three new vendor introductions forwarded from the CEO’s office.
This is what I’ve started calling AI strategy theater. It satisfies the board question without answering it.
When pilots become the portfolio
A pilot is supposed to answer one question: Can this technology do this specific thing well enough to justify the investment to scale it? That’s a narrow mandate. It is time-boxed, scoped to a defined use case and designed to produce a binary decision.
The model operating in most organizations right now is a significant departure from that. When board pressure is high and timelines are compressed, the path of least resistance is to start something. Identify a use case, engage a vendor, stand up a proof of concept and report back. This produces visible activity. It satisfies the governance question for the next quarter. And it defers the genuinely hard work of workflow integration, data infrastructure and change management to a future that consistently fails to arrive.
McKinsey’s 2025 State of AI research found that while 88% of companies are now using AI in at least one function, only 32% are in the scaling phase. The gap between experimentation and value creation is wide, and most organizations are sitting in it. The primary reason McKinsey identifies is that workflows have not been redesigned. The AI exists. The business process around it hasn’t changed.
Disconnected pilots don’t compound. They don’t build on each other. They don’t create the data infrastructure or integration architecture that scalable AI requires. What they create is a portfolio that demands maintenance without generating returns, and a narrative of AI investment that the underlying results don’t support.
The vendor dynamic accelerates this further. I’ve watched it play out enough times to recognize the pattern. Enterprise AI vendors have every incentive to help organizations launch new pilots. Sales cycles are short. Proofs of concept produce impressive results in bounded environments. The vendor’s success metric is the signed agreement and the reference customer. Whether the pilot integrates with enterprise architecture and whether it performs when production data replaces demo data are the customer’s problems. Most contracts are structured accordingly.
The governance gap that’s costing CIOs credibility
Urgency creates a second problem that gets far less attention: ungoverned AI decisions spread fast. When the board mandate to move on AI reaches business units, they act on their own timelines and with their own vendor relationships. Finance signs a tool agreement that hasn’t cleared IT architecture. Operations runs an automation pilot that touches production data. Marketing experiments with customer information that hasn’t been reviewed for compliance.
This is shadow IT at AI speed. The procurement and architecture review processes that would have slowed a significant software investment don’t apply to tools that can be deployed in an afternoon and produce impressive-looking outputs within days. By the time the CIO understands what’s actually running across the organization, the business has already formed opinions about AI efficacy based on tools that were never designed for enterprise scale.
The consequences accumulate quietly. Duplicate data pipelines with no clear ownership. Integration conflicts that don’t surface until something fails in production. Compliance exposures that emerge through an incident rather than through proactive reporting.
Credibility erodes through accumulation: AI investments that produce no measurable outcome, a governance failure discovered externally rather than reported, a business unit that has quietly stopped involving IT because the engagement process felt too slow for the pace it was asked to maintain. Each instance adds to a pattern the board eventually names. CIO.com’s coverage of the AI reset is tracking exactly this inflection: the organizations separating from the pack are the ones where the CIO has claimed clear ownership of the transition from experimentation to value and is governing it actively.
The CIOs I’ve seen navigate this well treat governance as an enabling function, not a checkpoint. The question is not whether a team got approval. It’s whether what they’re building connects to a workflow the business depends on, whether the data is ready to support it and whether there’s a measurement framework in place before the first model runs.
What disciplined execution actually requires
Disciplined AI execution is selective and deliberate about where effort concentrates.
The organizations I’ve seen successfully move AI from proof of concept to production share one characteristic: they made explicit, documented decisions about where to invest, and they held that line when the pressure to add more pilots was high. This means maintaining a short list of initiatives that meet a defined threshold before any work begins. The workflow is well understood and owned by a business leader with change management authority. The data is accessible, clean and governed. A measurement framework defines success before deployment, not after.
If a proposed initiative cannot meet that threshold, it does not get started. This is harder than it sounds when the board is asking for progress and a vendor is offering favorable terms on a pilot. Saying no to visible activity in favor of invisible foundations requires real credibility, which takes time to build and is easy to spend. The pressure to show movement is constant. The value of restraint is invisible until it isn’t.
Building internal capability is the other dimension that separates CIOs who are building real AI programs from those who are managing AI theater. Tools will keep proliferating. That’s not going to change. The strategic question is whether the organization is developing genuine capability to evaluate, integrate and govern AI at scale, or whether it remains permanently dependent on vendor roadmaps and vendor support structures. The former builds compound organizational advantage over time. The latter builds a technology portfolio that looks like capability from the outside and functions like dependency from the inside.
There is a version of AI leadership that reads well in board presentations and produces almost nothing of operational value. Pilots run. Progress reports circulate. The real question, “What is this actually producing for the business?” goes unasked because asking it would complicate the narrative and require a harder conversation about what the last twelve months actually accomplished.
AI leadership gets measured by one thing in the end: how many pilots survived long enough to change how the business actually operates. Most of what’s being built right now won’t.
This article is published as part of the Foundry Expert Contributor Network.