Most security initiatives fail before the first line of code is written. Not because the technology is wrong, but because the problem was framed poorly from the start. Leaders often move fast toward familiar answers, then wonder why progress stalls.
Last year, a global cybersecurity technology company brought me in to help run a Privileged Access Management proof of concept. On paper, it made sense. PAM was proven, defensible and easy to justify to cybersecurity leaders. The intent was legitimate and the urgency was real.
Once we looked closer, the real issue became obvious. Centralizing PAM had no internal support. Stakeholders were wary, resisted heavy controls and were unconvinced it would help them do their jobs. Pushing forward would have burned credibility and months of effort.
So we reframed the work. Not as privileged access enforcement, but as non-human identity visibility. We stayed with existing platforms, added a focused governance layer and prioritized understanding before control. Within weeks, leaders had a clearer picture of what was actually happening and where the real risk lived.
The breakdown had nothing to do with tooling. It came from acting without organizational intelligence. The technology was ready. The organization was not. Until leaders can read that difference, even the best ideas will struggle to move.
This kind of misdiagnosis is becoming increasingly common, especially as security leaders are asked to decide faster, explain more and deliver results in environments that no longer behave the way they used to.
The information flood is breaking human judgment
The volume of information confronting security leaders has crossed a line. It is no longer challenging in a healthy way. It is relentless, and it is changing how decisions get made, often for the worse.
In her recent essay on staying sharp in the age of AI, Sol Rashidi argues that “Today information is doubling every 8 hours.” That pace alone breaks old habits. Reports arrive faster than leaders can review them. Dashboards refresh constantly. Briefings stack up, each one technically correct and collectively exhausting.
The problem is not access to data. It is deciding what deserves attention right now. We see this clearly in areas like third-party risk management. Organizations have questionnaires, evidence repositories, policies and tools. None of those are scarce. What runs out first is judgment. Senior leaders and analysts become bottlenecks because every meaningful decision still lives in someone’s head, shaped by experience, context and risk tolerance.
As volume grows, confidence drops. Decisions slow down or default to the safest possible answer, even when it hurts the business. AI does not fix this on its own. Without a grounding in real organizational judgment, it simply produces more material to review.
This is how sensemaking breaks. Not from lack of information, but from the inability to apply human judgment under pressure and at scale.
Why the people–process–technology model no longer holds
The people-process-technology model starts to crack the moment work stops behaving neatly. I see this repeatedly. A security platform is selected. The process looks solid in the slides. Teams are trained and the rollout begins. Months later, adoption is uneven, exceptions stack up and the business finds ways around the controls.
The issue is not the tool. Technology no longer operates in isolation. It collides with incentives, deadlines, fear and internal politics. A control that makes sense on its own can fail quickly when it runs into revenue pressure or operational urgency. That is exactly what happened in the PAM POC example. The technology was sound. The organization was not ready to move with it.
What actually holds people, process and technology together is purpose. Clear intent focuses judgment. It makes tradeoffs explicit and ties security work to real outcomes. Without it, programs drift into defensive behavior and checkbox execution.
Just as leaders lose visibility into the system they are trying to manage, the system itself is changing shape.
AI agents are reshaping the enterprise at a structural level
When the system changes shape, leadership habits are usually the last thing to catch up. That is exactly what is happening as agents move from experiments into day-to-day operations. This is not a future scenario. It is already underway inside serious organizations.
As Bob Sternfels shared on Harvard Business Review’s IdeaCast podcast, McKinsey & Company now counts roughly 40,000 humans working alongside 25,000 AI agents. Inside the firm, agents handle research, synthesis, internal reviews and early drafts, while consultants remain responsible for judgment, client trust and final decisions. The point is not the headline numbers. It is that a century-old professional services firm is actively redesigning how work flows through the organization.
We see the same dynamic closer to security operations. TORQ uses agents to coordinate workflows in the SOC that once required constant human handoffs. Twine applies agent-based collaboration to help teams share context and decisions across complex environments. In both cases, agents are not acting independently. They are embedded into how workflows run, decisions form and actions follow.
This matters because execution and coordination are no longer purely human activities. Work now happens across people, platforms, partners and machines at the same time. Old structures struggle to explain who owns what, how decisions were reached and where accountability lives.
This is not about replacing people. It is about redesigning how work gets done as humans learn to operate alongside agentic teammates, so that judgment, accountability and trust are applied where they matter most.
Working alongside agentic teammates
The real opportunity with agents is not speed for its own sake. It is clarity. When used well, agents remove the friction that drags leaders into constant review, coordination and rework. They take on the repeatable cognitive tasks that exhaust teams and pull attention away from decisions that actually matter.
In innovative organizations, agents do not act alone. They work as teammates. They prepare context, compare options and bring patterns into view so leaders can decide with confidence. Humans stay responsible for judgment, accountability and trust. That division of labor is deliberate, and it is where value is created.
This is what agentic leadership looks like in practice. Leaders design how humans and agents work together. They decide which decisions stay human-owned and which inputs can be prepared by machines. They set boundaries, expectations and review points that keep governance intact while increasing throughput.
Augmentation is not a compromise. It is a control strategy. By protecting human judgment from overload, leaders create space for better decisions, faster alignment and teams that operate with purpose instead of fatigue.
Augmenting judgment in an agentic enterprise
The promise of agents is often framed around speed. That misses the point. Speaking on his December 2025 podcast episode, Eyes to See: AI, Leadership and the Courage to Move Faster Than Comfort, Jason Elrod, CISO of MultiCare Health System, argues, “The real value of digital twins and verified intelligence is clarity. When leaders can reason against trusted representations, judgment improves and learning loops get tighter without giving up responsibility.”
That framing matters. Forward-thinking organizations are not racing to remove humans from the loop. They are redesigning the loop. Agents prepare context, test assumptions and reflect prior decisions back to the team. Leaders decide. Accountability stays human. Trust stays intact.
This is where agentic leadership shows up. Not in legacy automation, but in deliberate design choices about how people and agents work together. Which inputs can be prepared by machines? Which decisions remain owned by leaders, where review and challenge still matter?
The end of the org chart: Leadership redefined
The traditional org chart was built for a simpler reality. It assumed that work was done by humans, that reporting lines explained accountability and that coordination followed predictable paths. For a long time, that model worked well enough to keep organizations moving.
Agentic enterprises operate by different rules. Work now includes non-human contributors that prepare analysis, coordinate actions and reason in parallel. Execution is distributed across teams, platforms, partners and agents at the same time. Decisions form faster than hierarchy can explain them. Accountability no longer maps cleanly to boxes and lines.
This is where leadership must evolve. Not by clinging to control structures designed for another era, but by learning how to guide systems that behave differently. Leaders still own decisions. They still carry responsibility. What changes is how intelligence is gathered, transformed and applied.
Next-generation leaders will not just manage people. They will manage networks of agents designed to augment human judgment and the verified intelligence those agents produce.
Leadership is no longer measured by span of control. It is measured by the ability to sense what matters, verify what is true and act decisively before risk spreads beyond reach.
This article is published as part of the Foundry Expert Contributor Network.