Most enterprises can't stop stage-three AI agent threats, VentureBeat survey finds
In March, a rogue AI agent at Meta passed every identity check and still exposed sensitive data to unauthorized employees, raising fresh concerns about the security of AI systems. Two weeks later, Mercor, a $10 billion AI startup, confirmed a supply-chain breach through LiteLLM. Both incidents trace back to the same structural gap: monitoring without enforcement, and enforcement without isolation. According to a VentureBeat survey, this is the most common security architecture in production today.
Gravitee’s State of AI Agent Security 2026 survey of 919 executives and practitioners revealed a disconnect between the belief that security policies protect against unauthorized agent actions and the reality of AI agent security incidents: only 21% of enterprises have runtime visibility into their agents’ activities, yet 88% reported a security incident in the last twelve months. Separately, Arkose Labs’ 2026 Agentic AI Security Report found that 97% of enterprise security leaders expect a material AI-agent-driven incident within the next 12 months, while only 6% of security budgets address this risk.
Security budgets are shifting in response: monitoring investment rose to 45% in March after dropping to 24% in February. The shift reflects how hard it is for enterprises to keep pace with machine-speed threats. CrowdStrike’s Falcon sensors have detected over 1,800 distinct AI applications across enterprise endpoints, and the fastest recorded adversary breakout time has dropped to 27 seconds.
The audit that follows maps three maturity stages to the security gap in AI agent systems: observe (stage one), enforce (stage two), and isolate (stage three). For each stage, it covers investment signals, threat vectors, regulatory surfaces, and the immediate steps security leaders can take to improve their posture.
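The three stages can be illustrated with a minimal sketch. Everything here is hypothetical (the agent names, the allowlist, the token format are illustrative assumptions, not the article's implementation), but it shows why each stage matters: observation only records an action, enforcement can block it, and isolation limits what a blocked-or-missed action could reach.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    tool: str
    scope: str  # e.g. "read" or "write"

audit_log: list[AgentAction] = []

def observe(action: AgentAction) -> None:
    # Stage one: record every agent action for later review.
    # Observation alone cannot stop a malicious action.
    audit_log.append(action)

# Hypothetical allowlist of (agent, tool, scope) tuples.
ALLOWED = {("billing-agent", "crm", "read")}

def enforce(action: AgentAction) -> bool:
    # Stage two: permit only actions on an explicit allowlist.
    return (action.agent_id, action.tool, action.scope) in ALLOWED

def isolate(action: AgentAction) -> str:
    # Stage three: execute with a credential scoped to this single
    # agent, tool, and scope, so a compromise cannot spread laterally.
    return f"scoped-token:{action.agent_id}:{action.tool}:{action.scope}"

action = AgentAction("billing-agent", "crm", "read")
observe(action)
if enforce(action):
    token = isolate(action)
```

A stage-one-only deployment ends after `observe`: the exfiltration is in the logs, but it still happened.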
The audit also highlights the threat surface that stage-one security cannot see, as cataloged in the OWASP Top 10 for Agentic Applications 2026. It maps six of those risks to the stage where each is most likely to surface and to the controls that address it, underscoring that a comprehensive security strategy must go beyond monitoring.
Two further considerations frame the problem: the regulatory clock and the identity architecture. HIPAA’s 2026 Tier 4 willful-neglect maximum penalty of $2.19M per violation category per year raises the cost of weak controls, while the identity problem is architectural at its root: shared API keys across agents, and agents permitted to create and task other agents.
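The identity fix implied here is per-agent, short-lived, narrowly scoped credentials instead of shared keys. A minimal sketch, under assumed names and a toy token format (the article does not prescribe an implementation): each agent gets its own credential, and a spawned sub-agent can only inherit a subset of its parent's scopes, so delegation never widens privilege.

```python
import secrets
import time

def issue_token(agent_id: str, scopes: set[str], ttl_s: int = 300) -> dict:
    # Hypothetical issuer: a unique, expiring credential per agent,
    # in place of a shared API key.
    return {
        "agent_id": agent_id,
        "scopes": set(scopes),
        "expires_at": time.time() + ttl_s,
        "secret": secrets.token_hex(16),
    }

def delegate(parent: dict, child_id: str, scopes: set[str]) -> dict:
    # Delegation never widens privilege: a sub-agent's scopes must
    # be a subset of its parent's, and its token expires sooner.
    if not scopes <= parent["scopes"]:
        raise PermissionError("sub-agent requested scopes beyond parent")
    return issue_token(child_id, scopes, ttl_s=60)

parent = issue_token("research-agent", {"search:read", "docs:read"})
child = delegate(parent, "summarizer-agent", {"docs:read"})
```

With a shared API key, an agent that tasks another agent hands over everything; here, the blast radius of any one credential is bounded by construction.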
The article also provides a prescriptive matrix for conducting an AI agent security maturity audit, laying out the attack scenarios, detection tests, blast radius, and recommended controls for each stage: observe, enforce, and isolate. The matrix serves as a guide for enterprises assessing and hardening their AI agent security posture.
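A matrix of this shape is straightforward to encode for internal tooling. In the sketch below, the stage names and column headings come from the article; every cell value is an illustrative assumption, not the article's actual content.

```python
# Hypothetical encoding of a stage-by-stage maturity matrix.
MATURITY_MATRIX = {
    "observe": {
        "attack_scenario": "agent exfiltrates data through an approved tool",
        "detection_test": "can every agent action be reconstructed from logs?",
        "blast_radius": "everything the shared credential can reach",
        "recommended_control": "runtime visibility into agent traffic",
    },
    "enforce": {
        "attack_scenario": "prompt-injected agent attempts an out-of-policy write",
        "detection_test": "is the write blocked at runtime, not just logged?",
        "blast_radius": "systems the agent holds write access to",
        "recommended_control": "policy enforcement on every agent action",
    },
    "isolate": {
        "attack_scenario": "compromised agent creates and tasks other agents",
        "detection_test": "can a rogue agent's credentials move laterally?",
        "blast_radius": "limited to the isolated agent's own scope",
        "recommended_control": "per-agent identity and sandboxed execution",
    },
}

def controls_beyond(stage: str) -> list[str]:
    # Controls an enterprise at `stage` has not yet deployed.
    order = ["observe", "enforce", "isolate"]
    return [MATURITY_MATRIX[s]["recommended_control"]
            for s in order[order.index(stage) + 1:]]
```

An enterprise that self-assesses at stage one can read its gap list directly: `controls_beyond("observe")` returns the enforcement and isolation controls it still lacks.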
The article closes with an assessment of hyperscaler stage readiness, weighing the capabilities and gaps in the AI security offerings of Microsoft Azure, Google Cloud, AWS, and OpenAI. It stresses the need for a deliberate security strategy that includes enforcement and isolation, especially for enterprises running agents with write access, shared credentials, and agent-to-agent delegation.
Overall, the article underscores the need for enterprises to prioritize AI agent security, invest across monitoring, enforcement, and isolation, and take a proactive approach to an evolving threat landscape. By following the recommended steps, organizations can strengthen their AI security posture and mitigate the risks posed by rogue AI agents.