In March 2023, automation company Zapier declared an internal code red, urging teams to sprint toward AI experimentation. Prototypes bloomed overnight. Workflows were rebuilt. “The energy was incredible,” says Brandon Sammut, the company’s chief people and AI transformation officer. “Teams were building AI-powered workflows.”
Yet few of those automations made it to production. While the models worked well in isolation, they couldn’t survive inside the web of Zapier’s existing tools, data sources, approval flows, and human workflows.
“That’s when it really clicked for me,” Sammut adds. “The hard part of AI isn’t the AI itself. It’s the orchestration around it.”
The demo-to-production gap, as he calls it, is only one of the reasons AI initiatives go sideways. Fragmented data, weak governance, and a disconnect between leaders and frontline teams often compound the problem. The numbers reflect this as well. MIT’s State of AI in Business 2025 report estimated that about 95% of gen AI pilots fail to produce measurable business impact.
It’s no secret that AI adoption is messy and far more complicated than some estimates suggest. So C-level executives have to recognize when an AI initiative is drifting off course, and decide whether it’s worth fixing. With the right strategy, though, a struggling experiment can turn into a project that serves the business.
“AI shouldn’t be sustained on optimism alone,” says Scott Likens, the US and global chief AI engineering officer at PwC. “It needs observable, repeatable outcomes tied to business value.”
First signs of failure
Signs of trouble often surface early. A common one is a project that seems to be moving forward while the date for putting it into real use keeps getting pushed back. “‘A few more weeks’ turns into ‘We need to sort out the integration,’ which turns into ‘We’re waiting on a security review,’” Sammut says. “Each delay feels reasonable on its own, but taken together, it’s a pattern.”
Another sign that a project is going south is a gap between leaders and practitioners. Executives often feel they have a clear view of the project, while the engineers and operators doing the work say much of the day-to-day friction goes unseen. At one point, Sammut believed projects were on track because he was hearing about milestones and launch dates. But a closer look showed that teams were stuck on integration backlogs and policy delays, problems that rarely surfaced during executive briefings.
In other cases, projects fail because they were never considered a priority by leadership. It happened to Eli Vovsha, manager of data science at cybersecurity software provider Fortra. Around 2022, he and a colleague set out to transform a hackathon prototype into a production-ready system powered by reinforcement learning.
Vovsha and his colleague delivered a proof of concept (POC), but from that point on, things stalled. “After various delays, the project was simply abandoned,” he says.
Looking back, he adds, the initiative faltered because leadership never treated it as a real priority. “This lack of genuine interest meant our engineer colleague had to repeatedly postpone work as higher priority items came his way,” he says.
He doesn’t question the technical decisions he made; the lesson was tactical. “As a project manager, I’d be very careful to ensure the initiative is aligned with the vision and roadmaps of the appropriate product managers, and there’s a built-in lever to guarantee engineering support,” he says.
Likens says he’s observed many AI initiatives lose momentum when they drift from clear business goals. When it comes to AI, he says, flexibility matters more than ownership, and refocusing on specific business problems, supported by stronger data and governance, can help teams move faster and deliver results that last.
But not all warning signs are easy to spot. Some are subtle: the project fades from agendas, people stop talking about it, and updates grow vague. The excitement that once surrounded it gives way to polite silence. That’s why executives need to pay attention to what’s not being said. They should also listen to users if the project has already been deployed. “The main thing to watch for is the absence of positive user feedback,” says Australian software engineer Sean Goedecke, who writes about AI and large-company dynamics on his website. “The most successful AI products have immediately clicked with users.”
AI projects that fail, he adds, usually do so because they’re driven by the urge to do something with AI rather than to solve a real problem for users.
Rescuing failed initiatives
Some AI projects can be saved, but recovery requires a shift in mindset: these initiatives shouldn’t be treated as technical experiments. Leaders should instead focus on how the work will deliver real value to the business. That means integrating the projects into real workflows, assigning clear ownership, and defining measurable outcomes.
“The first step is shifting from model performance metrics to workflow performance metrics,” Likens says. “Ask whether the business outcome is improving, not whether the model is accurate.”
To have any chance of rescuing a failed AI initiative, organizations need to gain visibility into what’s actually happening, not what leadership thinks is happening. From there, they can figure out where the orchestration gaps are and what’s blocking progress.
“We stopped looking at the model’s performance in isolation and instead looked at the full workflow,” Sammut says. That meant asking where the manual handoffs were, where people were copying results from one system into another, and where the process broke down between what the AI produced and what teams actually needed to act on.
Once those questions are answered, companies can take concrete steps. Sammut and his colleagues suggest assigning clear ownership. “Someone has to be accountable for production outcomes, not just the experiment,” he says.
They also invested in earlier integration planning, pressing teams to ask hard questions about systems, data flows, and workflow dependencies before deadlines loomed. And they built governance into the process from the start, rather than letting compliance and security reviews surface late as unexpected obstacles. They standardized their tooling so teams worked on the same platforms, and prioritized training to help employees learn how to use the new technology and adapt their workflows.
“When teams share both a common foundation and a way to learn from each other, every new initiative builds on the last instead of starting from scratch,” Sammut says. “That’s how experimentation becomes repeatable capability.”
He also insists organizations use a three-pillar framework when developing AI projects: every initiative should deliver measurable improvements in efficiency, quality, and employee experience. “Efficiency alone leads to job displacement, fears, and resistance,” he says. “Quality alone doesn’t justify the investment, and if the people doing the work hate the new process, it’s not going to stick.”
Applying that test earlier, he adds, would’ve helped his team shut down weaker efforts sooner, and focus time and investment on the projects that showed real promise.
When to shut down an AI project
Sometimes organizations keep troubled projects alive long after their prospects have dimmed. Ending an AI initiative can be painful, especially after months of work and public backing, but experienced executives say knowing when to stop is part of the game. Shutting down a struggling project frees up time, money, and talent for ideas that can deliver real value.
That discipline is even more important in AI, where the technology is moving fast. “Organizations should be especially willing to shut down AI initiatives,” says Goedecke. “The landscape is changing so quickly that previously impossible projects become possible every month, so a 12-month-old AI initiative should probably be revisited purely on the basis of age.”
Sammut has seen this pattern, too. Teams cling to AI pilots out of sunk-cost thinking, continuing to invest time and effort just because so much has already been spent and no one wants to admit it might not work. “That’s a trap,” he says. “It’s better to redirect that energy and budget toward a higher-impact opportunity than to keep pouring resources into something that’s not going to move the needle.”
He recommends shutting down initiatives that prove to be less impactful than expected, or when costs outweigh the value they could deliver, even in a best-case scenario.
“If a pilot has been almost ready for production for more than two cycles without a clear, specific blocker being resolved, it’s probably time for a direct conversation about whether to redesign or stop,” Sammut says.
Stopping a project doesn’t have to mean wasting the work behind it, though. Sometimes, moving the team to a different initiative can accelerate progress rather than stall it. “The meta-lesson here is that speed of learning matters more than any single initiative,” he says. “If shutting down one initiative frees your team to learn faster on a better one, that’s not failure. That’s good leadership.”
Dealing with doubt
AI initiatives carry enormous expectations. They’re launched with bold promises, ambitious timelines, and the hope of quick transformation. “When they don’t deliver, it can feel like a public setback, especially with how visible AI has become,” Likens says.
Often, when they fall short, the pressure can feel deeply personal. “There’s a real moment of doubt,” says Sammut. “You wonder if you pushed too hard, or not hard enough. You wonder if people are losing confidence in the broader AI strategy because of one initiative that stalled.”
The teams building these AI projects feel it too. Engineers and product managers can become more cautious and less willing to take chances or propose bold ideas. Over time, that hesitation can slow innovation far more than any single failed project.
“There’s a real risk that people internalize the message that experimentation is risky or AI is overhyped,” Sammut adds. “Both of those conclusions are wrong, but they’re natural responses when things don’t work out.”
Transparency is important in these cases. “When something doesn’t work, we talk about why in a way that treats the experience as an input, not a verdict of blame,” Sammut says.
In a field moving as quickly as AI, that openness can be a competitive advantage. “We believe in learning in public and sharing what we learn, whether it’s polished or not,” he adds.