AI agents are running hospital records and factory inspections. Enterprise IAM was never built for them.

Agentic AI is spreading across industries, from healthcare to manufacturing, where autonomous agents are being deployed to streamline processes and improve efficiency. But a significant roadblock stands in the way of widespread adoption: the lack of proper identity governance for these agents.

The trust gap in agentic AI deployments stems from enterprises' inability to effectively manage and control the identities of these non-human agents. Cisco President Jeetu Patel highlighted the problem, noting that while 85% of enterprises are running agent pilots, only 5% have successfully moved them to production. The primary concern for CISOs is knowing which agents have access to sensitive systems and holding them accountable for any unauthorized actions.

A recent study by IANS Research found that most businesses lack the necessary role-based access control mechanisms to manage human identities effectively, let alone non-human agents. This gap in identity governance poses a significant security risk, as evidenced by the 2026 IBM X-Force Threat Intelligence Index, which reported a 44% increase in attacks exploiting public-facing applications due to missing authentication controls.
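The RBAC gap described above comes down to a simple question: can the organization answer "what is this identity allowed to do?" for agents as readily as for people? A minimal sketch, with invented role and permission names, shows the same role-based check applied to a non-human agent identity:

```python
from dataclasses import dataclass

# Illustrative only: a minimal RBAC check extended to non-human agent
# identities. Role names, permission strings, and agent names are
# invented for this example.

@dataclass(frozen=True)
class Identity:
    name: str
    kind: str            # "human" or "agent"
    roles: frozenset

# Each role maps to an explicit set of permissions.
ROLE_PERMISSIONS = {
    "records-reader": {"ehr:read"},
    "inspection-bot": {"factory:read", "factory:flag_defect"},
}

def is_allowed(identity: Identity, permission: str) -> bool:
    """Return True if any of the identity's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in identity.roles)

agent = Identity("inspect-agent-07", "agent", frozenset({"inspection-bot"}))
assert is_allowed(agent, "factory:flag_defect")
assert not is_allowed(agent, "ehr:read")   # agent cannot touch health records
```

The point is that an agent gets its own identity record and its own roles, rather than borrowing a human user's credentials, so every access decision is attributable.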

Michael Dickman, SVP and GM of Cisco’s Campus Networking business, emphasized the need for a trust framework that prioritizes security from the outset of agentic AI deployments. Unlike previous technology transitions where security was an afterthought, Dickman argues that trust should be a foundational requirement for deploying autonomous agents.

Dickman identified four key conditions for establishing trust in agentic AI deployments. These include secure delegation, cultural readiness, token economics, and human judgment. By defining clear permissions for each agent, ensuring organizational readiness for autonomous workflows, managing computational costs, and incorporating human oversight, enterprises can build a robust trust framework for their AI deployments.
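Of the four conditions, secure delegation is the most directly mechanizable. A hedged sketch of the idea, with invented names and scopes: a human principal issues an agent a narrowly scoped, short-lived grant rather than sharing its own credentials.

```python
import time

# Illustrative only: "secure delegation" as a scoped, expiring grant.
# Principal, agent, and scope names are invented for this example.

def delegate(principal: str, agent: str, scopes: set, ttl_seconds: int) -> dict:
    """Issue an agent a grant limited to specific scopes and a deadline."""
    return {
        "principal": principal,
        "agent": agent,
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def check(grant: dict, agent: str, scope: str) -> bool:
    """An action is allowed only for the named agent, within scope, before expiry."""
    return (grant["agent"] == agent
            and scope in grant["scopes"]
            and time.time() < grant["expires_at"])

grant = delegate("dr.smith", "records-agent-01", {"ehr:read"}, ttl_seconds=900)
assert check(grant, "records-agent-01", "ehr:read")
assert not check(grant, "records-agent-01", "ehr:write")  # scope never delegated
```

Because every grant names the delegating principal, the expiry enforces that autonomy is always time-boxed and the scope list defines exactly what the agent may do on that principal's behalf.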

One of the critical components of building trust in agentic AI is cross-domain visibility. Dickman emphasized the importance of unifying network, security, and application telemetry into a shared data fabric to enable cross-domain correlation. By breaking down silos and creating a comprehensive view of system-to-system communications, organizations can enhance their ability to enforce agent policies effectively.
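The shared data fabric Dickman describes amounts to joining events from every domain on a common agent identity, so a policy engine sees one timeline per agent instead of three disconnected logs. A simplified sketch, with invented event fields:

```python
from collections import defaultdict

# Illustrative only: network, security, and application telemetry
# correlated by agent identity. Event shapes and names are invented
# for this example.

events = [
    {"domain": "network",  "agent": "records-agent-01", "detail": "connected to ehr-db:5432"},
    {"domain": "security", "agent": "records-agent-01", "detail": "token issued, scope ehr:read"},
    {"domain": "app",      "agent": "records-agent-01", "detail": "exported patient summary"},
    {"domain": "network",  "agent": "inspect-agent-07", "detail": "connected to camera-feed"},
]

def correlate(events: list) -> dict:
    """Group telemetry from all domains into one timeline per agent."""
    timeline = defaultdict(list)
    for event in events:
        timeline[event["agent"]].append((event["domain"], event["detail"]))
    return dict(timeline)

timeline = correlate(events)
assert len(timeline["records-agent-01"]) == 3   # full cross-domain trail
```

With the silos merged, a question like "which systems did this agent touch after its token was issued?" becomes a lookup on one keyed timeline rather than a manual join across three tools.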

To address the trust gap in agentic AI deployments, enterprises must focus on five key priorities. These include forcing cross-functional alignment, preparing IAM and PAM governance for agents, adopting a platform approach to networking infrastructure, designing hybrid architectures, and ensuring the first use cases are bulletproof on trust. By following these guidelines, organizations can accelerate their transition from pilot to production deployments of autonomous agents.

The trust gap in agentic AI deployments is a critical challenge that must be closed to unlock the full potential of autonomous agents. By prioritizing identity governance, cross-domain visibility, and policy enforcement, enterprises can build a trust framework that enables secure and efficient deployment of AI agents. Those foundations will determine which organizations successfully move autonomous agents from pilot to production.
