The AI revolution: Getting culture right for AI success

With effective AI implementations likely to separate winning organizations from the also-rans, IT leaders are taking a variety of approaches to creating workplace cultures that empower nearly all employees to make productive, innovative use of AI.

Underlying such strategies is a focus on training, as well as encouragement to experiment with and implement AI in sync with IT guidance and governance to control the inherent risks of AI.

“Right now, we are investing in employees. The more you train them, the more AI they use, and the more ROI comes in,” says Vagesh Dave, global vice president and CIO of McDermott International, a provider of oil, gas, and renewable energy technologies.

Extending training into hands-on opportunities is vital, but it also requires a balance so that AI adoption doesn’t feel forced on employees, says Melissa Swift, founder and CEO of Anthrome Insight, a human capital management advisory firm.

“People learn technology by playing with it,” Swift says. “Top-down doesn’t work that well. An AI-friendly culture is neither all guardrails, nor anarchic, but finds a sweet spot.”

Yogaraj Jayaprakasam is one IT leader seeking such a balance. “Our intent is to reach all employees, but we are not forcing them,” says the chief technology and digital officer at Deluxe, a financial services technology company. “And we definitely don’t do anarchy, either. We made AI part of [employees’] goals and strategies.”

Overcoming the fear factor

When employees are empowered, success depends on the workers themselves — their embrace of AI, their creativity, and their initiative. To channel this energy, IT leaders need to help workers across the enterprise overcome an aversion to the new and the fear of one day being replaced by AI.

“Technology is only a small component of the issue. Getting people to engage and open their minds — the opportunity is just massive. Some people are open and excited, while some are afraid,” says Ben Ellencweig, senior partner at McKinsey & Co.

Why the fear? For some employees, the rise of AI might remind them of the outsourcing wave that crested a decade or more ago. No one wants to be in a position of having to train their replacement, especially if it’s an AI bot.

“When people are distrustful and fearful they will lose their job, it’s not good. Really good grassroots adoption will take place where there is a culture of trust. If humans trust each other, they can go through testing and get results faster,” says Anthrome’s Swift.  

Corporate leaders, meanwhile, are fearful that unleashing AI could create regulatory risks and legal liabilities they can’t control, whether due to hallucinations, exposed personally identifiable information (PII), or biased results. For these reasons, an effective governance council is critical, as is a comprehensive AI governance, risk, and compliance (GRC) framework. Such councils usually consist of heads of IT, HR, legal, and possibly others.

“Legal and security officers check out projects. If there are compliance issues, that stops a project,” says McDermott’s Dave. His council vets AI projects to implement only those that deliver significant benefits and that can be scaled across the organization.

These fears might underlie a McKinsey report that uncovers finger-pointing between management and employees when it comes to AI adoption. According to the report, while management thinks employees are not ready for full gen AI adoption, employees say they are already using the technology extensively and are being held back by management.

For example, three times as many employees are using gen AI for a third or more of their work than their leaders imagine, according to the report. Meanwhile, C-suite executives are 2.4 times more likely to cite employee readiness as a barrier for gen AI adoption, versus their own issues with leadership alignment. And 48% of employees rank training as the most important factor in gen AI adoption, but nearly half feel they are receiving moderate or less support from management.

Addressing skepticism and hesitancy

Skepticism is another barrier to AI experimentation and adoption. And one reason AI skepticism has taken hold among many workers is the prevalence of “AI slop,” or AI implementations that appear to accomplish a useful task, but don’t improve accuracy or efficiency.

“I’m a huge fan of professional development,” says Michael Schrage, a fellow with the MIT Initiative on the Digital Economy. In the insurance industry, he explains, “I’m happy to have claims adjusters and actuaries do promptathons.” But, he adds, “You do not want your insurance adjuster doing AI slop and thinking it has made them more efficient. There has to be oversight and guardrails. But an AI playground or sandbox — absolutely.”

Another reason for hesitancy in launching or embracing AI initiatives might be the lack of provable ROI from gen AI implementations, according to McKinsey’s Ellencweig.

“80% of companies have deployed gen AI in some form or shape, and 80% of companies report no material contribution to their bottom line from those implementations,” he says.  

Nilesh Thakker, president of Zinnov, a technology management consultancy, says that 92% of companies are piloting AI, but only 70% can tell what the ROI is and 55% have no structured governance. “It’s a problem we take very seriously,” he says.

But measuring AI ROI with a strict yardstick may well miss the point. Investing for corporatewide enablement of AI is a strategy based on a belief in future transformation, not merely turning nickels and dimes in the present. “We’re not focusing on savings, because we are focused on growth,” asserts Dimitris Bountolos, chief information and innovation officer at Ferrovial, a global engineering and construction firm.

One size of AI does not fit all

Deluxe’s Jayaprakasam targets AI education differently depending on the employee’s career status.

“Entry-level, midcareer, and C-suite — every level has different requirements,” he says. “A CIO should know how to build a strategic view on where AI creates value. For entry-level, it’s about AI fluency and critical human skills. For midcareer, it’s AI orchestration and change management beyond their existing domain skills.”

Jayaprakasam also groups AI strategy into three pillars: technology, business, and customer, each carrying a different level of risk. On the technology side, AI can be used to write code, greatly speeding the development of software, and for that purpose it introduces minimal risk. For internal business purposes, using AI to speed workflows and take over report writing and other repetitive drudge work increases efficiency with little added risk.

Adding AI capabilities to products and services for customers can improve quality while generating additional revenue. However, risk rises with the need to assure customer privacy and stay consistent with the terms of master service agreements (MSAs), says Jayaprakasam.

At Deluxe, when business units come up with AI use cases, they are presented to a central AI governance committee consisting of Jayaprakasam and representatives from HR, finance, legal, and the business unit, where they are graded on productivity gains. This process generates up to 30 ongoing AI projects at any given time, he says.

At Ferrovial, Bountolos has followed a similar approach: Encourage widespread AI experimentation, cull the results, and propagate the most successful ones.

“In January 2023, we created our first platform and let any employee use LLMs. Now we have training, change management, ideation, implementation, and adoption at scale,” he says. “Every time one agent is created, we look at how it can be used by the entire company. Hundreds of agents have been developed for specific tasks and purposes and more than 80% of them are shared.”

In one example, an AI agent takes over the task of planning highway lane closures to allow construction or repair. With many variables to consider, such as construction blueprints, daily traffic patterns, and bureaucratic approval, a lane closure scheme that previously took hours for humans to develop can now be created in a few minutes, according to Bountolos.

And while Bountolos focuses on growth, Jayaprakasam measures ROI. So far, he says AI initiatives have added $10 million to Deluxe’s profit margin. Half of that windfall came from software economies — increasing the efficiency of software development and decommissioning an out-of-date mainframe system.

“AI can convert mainframe code to modern code and automation,” he says.

Significant additional savings came from the use of AI in Microsoft Dynamics 365 accounts receivable, thanks to a customer-facing gen AI chatbot called Deluxe Assist, Jayaprakasam says.

Let them have a copilot

Key to any of these initiatives is the widespread use of AI tools.

“I believe [employees] have to use AI to know what AI can do. One of our jobs is educating employees — all employees — not just IT,” says McDermott’s Dave.

As for Ferrovial’s Bountolos, he is scattering AI tools far and wide. “100% of employees have access to [Microsoft] Copilot and 90% are using it,” he says.

Liberty Mutual is taking a similar approach. The insurer has given 74% of its workforce access to enterprise gen AI tools, according to Melanie Foley, the company’s chief people, purpose, and brand officer.

“We’re intentionally making AI approachable and accessible, demystifying it through real, relevant use cases like drafting, summarization, and analysis, so employees quickly see value, build confidence, and trust the technology,” says Foley. 

While Zinnov advises clients on how to get the most out of AI, the company is also putting AI to work internally.

“Most of our strongest AI initiatives have come from within the organization. Through hackathons and internal innovation forums, teams have built AI-assisted research engines, proposal acceleration tools, benchmarking automation workflows, and internal knowledge copilots,” says Amita Goyal, managing partner at Zinnov. “These were not centrally assigned projects. Teams saw repetitive cognitive work and redesigned it.”

A fast-moving train

It’s no secret that the capabilities of various forms of AI are increasing rapidly. By some estimates, AI is doubling in power every few months. That makes the era of AI qualitatively different from other IT disruptions, such as software as a service (SaaS) or mobile computing, says Jayaprakasam.

“AI is a fast-moving train. Every time you implement it, it gets better and better. How to implement AI — assuming it will be different every month and every day — is a new way of working for all of us,” he says.

It stands to reason that companies that harness this technology tsunami are likely to reap competitive advantage. For companies that have taken a cultural approach to AI, McKinsey’s Ellencweig sees the payback coming sooner rather than later.

“2026 will be the year of AI breakthrough, based on AI fluency,” he says.

Further, AI will become less a specific area of expertise than simply the way that business gets done.

Says Jayaprakasam, “We believe AI will be an ongoing way of changing your business model. Eventually AI gets embedded into everything you are doing.”