What most people thought was going to be another year of agentic AI is quickly turning into a more practical focus on simultaneously dealing with probabilistic (AI/ML-driven) and deterministic (traditional rule-based) code. Not a portfolio of both, but a growing number of hybrid applications that need to carefully and skillfully integrate the best of both guessing and knowing.
Many CIOs are no longer dealing with pilots and prototypes focused on specific off-the-shelf AI apps or custom agentic apps built solely within agent builder platforms. They’re now dealing with new application development requirements that need to combine both AI and traditional code.
These applications aren’t existing apps with AI bolted on, but new ones designed from the ground up, where CIOs are quickly finding themselves in the messy middle, having to decide where to draw boundaries and how to organize their teams.
Here are four recommendations for CIOs deciding how best to integrate agentic, probabilistic code with traditional, deterministic code, particularly within software development projects that require careful integration of the two.
Establish boundaries and guardrails
The first step is to understand where each technology works best and to develop architectural guidelines and best practices for development and integration teams.
Quais Taraki, CTO at AI and data company EDB, recommends using deterministic code for the authoritative rules of your business, and probabilistic agents for the messy ambiguity of human intent. “The key is a dual-representation architecture where agents suggest, but traditional logic guards the system of record,” he says. “By co-locating these in a single platform, you eliminate the integration tax that typically comes with bolting AI onto existing systems, while maintaining absolute sovereignty over your data and logic.”
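To make Taraki’s pattern concrete, here is a minimal sketch of an “agents suggest, deterministic logic guards” boundary. All names here (`Proposal`, `guard_and_commit`, the discount rule) are hypothetical illustrations, not EDB’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: an agent proposes an action (here, a discount);
# deterministic rules guard the system of record before anything commits.

@dataclass
class Proposal:
    customer_id: str
    discount_pct: float
    rationale: str  # the agent's free-text reasoning, kept for audit

MAX_DISCOUNT_PCT = 15.0  # authoritative business rule, owned by code

def guard_and_commit(proposal: Proposal, ledger: list) -> bool:
    """Deterministic gate: only rule-compliant proposals reach the ledger."""
    if not 0.0 <= proposal.discount_pct <= MAX_DISCOUNT_PCT:
        return False  # reject: agents suggest, but rules decide
    ledger.append(proposal)  # stand-in for the real system of record
    return True

ledger: list = []
accepted = guard_and_commit(Proposal("c-42", 10.0, "loyal customer"), ledger)
rejected = guard_and_commit(Proposal("c-7", 40.0, "large order"), ledger)
```

The agent’s suggestion and rationale are preserved for auditing, but the system of record only ever changes through the deterministic gate.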
Then there’s Michael Fauscette, chief analyst at Arion Research LLC, who says the key decision framework for CIOs is to use deterministic code wherever outcomes must be predictable, auditable, and repeatable, and reserve agentic and probabilistic approaches for tasks that require reasoning, judgment, or handling ambiguity at scale. “In practice, that means letting agents handle the messy middle of workflows, like interpretation, summarization, and decision support,” he says, “while traditional code owns data validation, transaction processing, compliance logic, and structured output generation.”
However, Sangeet Paul Choudary, a C-level advisor on AI strategy and author, believes it really depends on the tolerance for failure versus the upside of innovation. “Agents can help come up with novel solutions to problems coders wouldn’t have thought through, so where that’s valuable, I’d design with agents at the core, and code as checks and balances,” he says. “In scenarios with low tolerance for failure, though, I’d flip it.”
If you’re working on the agentic side first as part of a new software development project, it’s also important to optimize your agentic code and outputs first. You generally want to get these as accurate and repeatable as possible before deciding when and where to bring in the guardrails of traditional code. For example, poor prompting or a less-than-optimal LLM for a specific use case can shift the boundaries, and might even lead you to under-utilize the power of your agents in a search for the safety of traditional code.
Organize for new hybrid teams
These new hybrid applications require teams with mixed skillsets as well. Taraki recommends CIOs think of agents as highly capable employees inside your organization. “Like any employee with significant access and autonomy, they come with a large blast radius and can have a profound impact on your business, for better or worse,” he says. “Success requires collapsing the silos between AI and traditional dev teams to ensure that orchestration and observability are treated as critical infrastructure.”
Fauscette recommends CIOs rethink team composition to include bridge roles — engineers who understand both traditional software architecture and agentic design patterns — because siloed AI and engineering teams create integration debt that compounds quickly.
According to Choudary, it’s important to focus less on reactive QA and more on proactive checks in the development and tooling environment, with agents working alongside coders.
Overall, the handoffs and intersections between agentic and traditional code aren’t always as simple as an API call and structured output. It’s therefore important to think about not only the macro workflows between humans and AI, but also the numerous interfaces between probabilistic and deterministic code. Just like the handoffs between humans and machines, we also need well-positioned ones between AI and traditional code, and engineers who understand the tradeoffs.
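One common shape for such an interface is a deterministic validation layer that sits at the seam between probabilistic output and traditional code. The sketch below is illustrative only; the field names and checks are assumptions, not a prescribed schema.

```python
import json
from typing import Optional

# Illustrative seam between probabilistic and deterministic code: the
# agent returns raw text, and this deterministic boundary either yields
# a validated, typed handoff or refuses to pass anything downstream.

REQUIRED_FIELDS = {"intent": str, "confidence": float}  # assumed contract

def parse_agent_reply(raw: str) -> Optional[dict]:
    """Validate the agent's structured output before traditional code uses it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output never crosses the boundary
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            return None  # missing or mistyped field: reject the handoff
    if not 0.0 <= data["confidence"] <= 1.0:
        return None  # out-of-range values are a contract violation
    return data

good = parse_agent_reply('{"intent": "refund", "confidence": 0.92}')
bad = parse_agent_reply('I think the user wants a refund')
```

The point of the seam is that downstream deterministic code only ever sees data that already satisfies its contract; everything else is handled as an explicit failure mode rather than propagated.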
Prepare for time well spent on governance and cost modeling
While hybrid applications may accelerate the software development side, the time and cost savings will likely need to be reallocated to careful upstream software design and architecture, as well as downstream testing, monitoring, and cost modeling.
Fauscette says that on the governance and TCO side, hybrid systems introduce new complexity in testing, monitoring, and cost modeling since probabilistic components have variable execution paths and token-based cost structures that don’t fit neatly into traditional capacity planning or QA frameworks.
In terms of cost modeling, while inference costs may necessitate new business rules to set usage boundaries for end users, Taraki says that, ultimately, the TCO of the agentic era isn’t just about inference costs but the operational rigor required to manage non-deterministic systems at scale.
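To illustrate why token-based costs resist traditional capacity planning, here is a toy per-request cost model. The prices are placeholder assumptions, not real vendor rates; the variance between the two runs of the same feature is the point.

```python
# Toy token-cost model with assumed prices (not real vendor rates).
# Unlike fixed per-transaction costs, the cost of one request depends
# on how many tokens that particular execution happened to consume.

PRICE_PER_1K_INPUT = 0.003   # assumed USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed USD per 1,000 output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single inference call under the assumed price sheet."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# The same user-facing feature, two executions, very different costs:
short_run = request_cost(800, 200)     # terse prompt, terse answer
long_run = request_cost(6000, 2500)    # long context, verbose answer
```

A deterministic business rule (for example, a per-user token budget) can then cap the variance that this model exposes.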
Recognize how multi-agent workflows will further blur the lines
As if the new design considerations and organizational requirements for building hybrid systems with agentic and traditional code weren’t complex enough, we’re also dealing with a moving target as agents evolve.
Choudary adds that the center of gravity inside hybrids keeps shifting toward agents year by year. “We started with agents working on top of legacy code,” he says. “Now we’re increasingly seeing innovation-demanding systems designing around agentic capabilities, and using code for performance and risk management.”
Fauscette also recommends choosing a multi-agent workflow when the task is end-to-end cognitive work like research, analysis, and planning, and a hybrid AI and traditional approach when you need precise control over outputs, regulatory compliance, or integration with existing systems of record. “Looking ahead, the line between these two patterns will blur as agentic frameworks mature and offer better native support for deterministic checkpoints, structured outputs, and human-in-the-loop controls, making hybrid by default the standard architecture pattern within the next year to 18 months,” he says.
Taraki’s advice is to build for graceful degradation by ensuring every agentic step has a deterministic fallback, so your platform stays resilient and available even when models fail. “The future of the agentic era will look less like agentic glue and more like a sovereign, governed platform with SLAs, auditability, and standardized patterns for retrieval, tool use, and safety,” he adds. “Our research shows that the 13% of global enterprises prioritizing sovereignty over their data and AI are already seeing five times the ROI, and running twice as many use cases in mainstream production as their peers.”
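That graceful-degradation pattern can be sketched in a few lines: wrap the agentic step so that when the model fails or times out, a deterministic rule takes over instead of the request being dropped. The function names and keyword rule below are hypothetical stand-ins.

```python
import random

# Hedged sketch of graceful degradation: every agentic step gets a
# deterministic fallback, so the workflow degrades instead of failing.

def flaky_agent_classify(ticket: str) -> str:
    """Stand-in for a model call that can fail or time out."""
    if random.random() < 0.5:
        raise TimeoutError("model unavailable")
    return "billing"  # the agent's (simulated) classification

def rule_based_classify(ticket: str) -> str:
    """Deterministic fallback: coarse keyword rules, always available."""
    return "billing" if "invoice" in ticket.lower() else "general"

def classify(ticket: str) -> str:
    """Agent first, deterministic rules on any failure."""
    try:
        return flaky_agent_classify(ticket)
    except Exception:
        return rule_based_classify(ticket)  # degrade, don't drop the request

result = classify("Invoice #991 is wrong")
```

The fallback is coarser than the agent, but the platform stays available and the behavior under failure is predictable and auditable.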