Generative artificial intelligence (genAI) has been the dominant force in AI innovation, helping organizations work faster, smarter, and with heightened creativity. The next wave of agentic AI raises the stakes, with the promise of autonomous, multistep workflows and independent decision-making. Yet organizations must strike the right balance between automation and accountability to capitalize on new work patterns at enterprise scale.
A natural evolution of AI, agentic AI has gained traction this last year as a means of advancing operational efficiencies, trimming costs, and removing friction from customer and employee experiences.
But as genAI use cases proliferate, enterprises face challenges in integrating with existing systems and tools and in introducing autonomous action. In fact, despite upwards of $30 billion poured into genAI investments, 95% of organizations say they have yet to see any measurable profit-and-loss value, according to a recent MIT report. The disconnect has led to rising interest in combining AI technologies to transform complex workflows and achieve desired business outcomes.
“Fully autonomous and LLM [large language model]-only-based AI agents fall short, because for the enterprise, you need more than just autonomy,” said Marinela Profi, global AI and genAI market strategy lead at SAS, in a recent Foundry webinar. “To achieve that decisioning component, we are starting to combine LLMs with tools, memory, and probabilistic components like traditional AI.”
Three pillars of accountability
Organizations are embracing AI systems’ ability to provide feedback and recommendations, but they are not yet comfortable with handing the systems full autonomy to make decisions and initiate actions without some level of human oversight.
“Autonomy is great, but too much autonomy — especially in enterprise settings without oversight — can lead to unintended decisions, compliance issues, value violations, and brand damage,” Profi said. “Autonomy must be balanced with accountability, which means enterprises must know why an agent made a decision.”
Before identifying or deploying agentic AI use cases, organizations need to establish mechanisms that align with three tenets of accountability:
Explanation of why a particular decision is made
Proper governance and traceability
Human intervention for audits or overrides as needed
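The three tenets above amount to a design pattern: record a rationale with every decision, keep an audit trail, and route uncertain decisions to a person. The sketch below illustrates one minimal way that pattern could look in code. All names here (`AccountableAgent`, `AgentDecision`, the confidence threshold) are hypothetical illustrations, not from any vendor's product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class AgentDecision:
    """One agent decision, recorded for explanation and traceability."""
    action: str
    rationale: str      # tenet 1: why this decision was made
    confidence: float   # agent's self-reported confidence, 0.0-1.0
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AccountableAgent:
    """Wraps agent decisions with an audit log and a human override gate."""

    def __init__(self, review_threshold: float = 0.8,
                 reviewer: Optional[Callable[[AgentDecision], bool]] = None):
        self.review_threshold = review_threshold
        self.reviewer = reviewer                    # human-in-the-loop callback
        self.audit_log: list[AgentDecision] = []    # tenet 2: traceability

    def decide(self, action: str, rationale: str, confidence: float) -> bool:
        """Log the decision; escalate to a human when confidence is low."""
        decision = AgentDecision(action, rationale, confidence)
        self.audit_log.append(decision)
        if confidence < self.review_threshold and self.reviewer:
            return self.reviewer(decision)          # tenet 3: human override
        return True                                 # auto-approved

# Usage: a low-confidence decision is routed to a human reviewer,
# who (in this demo) rejects it; the decision is still logged.
agent = AccountableAgent(reviewer=lambda d: False)
approved = agent.decide("issue_refund", "policy matched, amount under limit", 0.6)
```

The key design choice is that logging happens before the approval gate, so even rejected or overridden actions leave a trace that auditors can inspect.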
Human-in-the-loop is also a critical factor for designing agentic AI applications. When application designers are automating a handful of tasks, system logs are often enough to explain any variances or corrections. But as complexity rises, human interaction becomes an essential part of workflow design, explained Eduardo Kassner, chief data and AI officer for the high-tech sector at Microsoft. “You’re doing it for quality, but what you really are doing is increasing usability because people trust the system more,” Kassner said.
Another factor to consider is the build-versus-buy equation. Vendors are incorporating agents into their software, and many are offering prebuilt AI agents to simplify and streamline deployment. Although these off-the-shelf tools can jump-start implementation, some custom development is necessary, given the specificity of tasks; the complexity of data management; and security, compliance, and sovereignty requirements, Kassner said.
As organizations move forward with agentic AI, the following criteria should be considered to ensure success:
Reliability and accuracy
Privacy
Security, compliance, and sovereignty requirements
Performance benchmarks
Cost management
Data access, governance, and management will be ongoing challenges — and, if done right, markers of success.
“The key takeaway is: Don’t just automate or generate,” Profi said. “Orchestrate decisions with intelligence and trust. That is the real power and promise of agentic AI.”
To learn more, watch the full webinar.