Agentic AI security breaches are coming: 7 ways to make sure it's not your firm
AI agents are becoming increasingly prevalent in enterprises: PwC reports that up to 79% of surveyed companies have implemented them. But alongside the efficiency gains, AI agents bring new security risks. When an agentic AI breach occurs, companies often rush to blame individual employees without addressing the systemic failures that allowed the breach in the first place.
Forrester’s Predictions 2026: Cybersecurity and Risk report forecasts that the first agentic AI breach will result in dismissals, highlighting the pressure on CISOs and CIOs to deploy agentic AI quickly while managing risks. The report also anticipates increased geopolitical turmoil and tighter regulations on critical communication infrastructure.
CISOs are facing a challenging year ahead, particularly those in globally competitive organizations. The EU is expected to establish a vulnerability database, leading to a demand for regional security professionals. Quantum-security spending is also projected to rise, reflecting the urgency to address quantum-resistant cryptography.
The adoption of agentic AI introduces new security threats such as data exfiltration and autonomous misuse of APIs. Jerry R. Geisler III, CISO at Walmart Inc., emphasizes the importance of proactive security controls using AI Security Posture Management to ensure continuous risk monitoring and regulatory compliance.
Clearwater Analytics’ CISO, Sam Evans, highlights the risks associated with employees using AI tools improperly. To address this, Evans adopted enterprise browsers like Island to protect sensitive data. Boardrooms are now tasking CISOs with securing AI applications without hindering productivity or innovation.
At Walmart, Geisler prioritizes innovation and a startup mindset to continually enhance security measures, and he emphasizes modernizing identity and access management to simplify processes without weakening security. VentureBeat observes that companies like Walmart and Clearwater Analytics are actively fortifying their cyber defenses against agentic AI threats.
Seven strategies emerge from discussions with CISOs for safeguarding against agentic AI threats. These include enhancing visibility, reinforcing API security, managing autonomous identities strategically, upgrading to real-time threat detection, embedding proactive oversight, adapting governance to AI’s rapid deployment, and engineering incident response ahead of threats.
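To make one of these strategies concrete, consider "managing autonomous identities strategically." A common least-privilege pattern is to register each agent with an explicit scope allowance and issue it only short-lived credentials. The sketch below is a minimal illustration of that idea, not the implementation used by any company named above; the agent names, scope strings, and functions are all hypothetical.

```python
import time
import secrets

# Hypothetical registry: each AI agent identity and the API scopes it may hold.
AGENT_SCOPES = {
    "invoice-bot": {"billing:read"},
    "support-agent": {"tickets:read", "tickets:write"},
}

def mint_agent_token(agent_id, requested_scopes, ttl_seconds=300):
    """Issue a short-lived, least-privilege credential for an AI agent.

    Scope requests outside the agent's registered allowance are rejected,
    and the token expires quickly so a leaked credential has a small window.
    """
    allowed = AGENT_SCOPES.get(agent_id, set())
    if not set(requested_scopes) <= allowed:
        raise PermissionError(f"{agent_id} requested scopes beyond its allowance")
    return {
        "agent_id": agent_id,
        "scopes": set(requested_scopes),
        "token": secrets.token_urlsafe(16),
        "expires_at": time.time() + ttl_seconds,
    }

def token_valid(token, scope):
    """Check a token at call time: it must be unexpired and hold the scope."""
    return time.time() < token["expires_at"] and scope in token["scopes"]
```

For example, `mint_agent_token("invoice-bot", {"billing:read"})` succeeds and the resulting token passes `token_valid(..., "billing:read")`, while asking for `"billing:write"` raises `PermissionError`. The design choice is that authorization is enforced twice: once when the credential is minted, and again on every API call via the expiry check, which limits the blast radius of autonomous misuse.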
As agentic AI reshapes the threat landscape, CISOs must proactively close governance gaps and harden API security and identity management to stay ahead of evolving risks. Those who prioritize real-time monitoring, integrated governance, and proactive incident response will gain a strategic advantage.