Technology

Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall

Adversaries have been targeting organizations by injecting malicious prompts into legitimate AI tools, resulting in the theft of credentials and cryptocurrency. This alarming trend has affected over 90 organizations in 2025, highlighting the vulnerability of AI systems to malicious attacks. The compromised tools were able to read data but lacked the capability to rewrite firewall rules, creating a significant security gap.

However, a new wave of autonomous SOC agents is now being deployed, offering the ability to rewrite infrastructure and remediate security issues. These advanced agents have the power to modify firewall rules, IAM policies, and quarantine endpoints with their own privileged credentials, all through approved API calls that may be classified as authorized activity by EDR systems. This presents a concerning escalation in the capabilities of malicious actors, who can now exploit these autonomous agents to carry out destructive actions without directly interacting with the network.

Major companies like Cisco and Ivanti have introduced autonomous security solutions that leverage AI to enhance firewall remediation and compliance capabilities. These tools are designed to proactively address security threats and ensure regulatory compliance in real-time. However, the rapid advancement of AI technology is outpacing the governance mechanisms necessary to prevent misuse and exploitation.

The rise of AI-enabled adversaries has led to an 89% increase in cyber operations targeting AI systems. Malicious actors have already exploited vulnerabilities in AI workflows by deploying malicious MCP server clones that impersonate trusted services. The U.K. National Cyber Security Centre has warned that prompt injection attacks against AI applications pose a significant challenge, as these tools now have the ability to write, enforce, and remediate security controls.

To address these emerging threats, organizations must implement robust governance frameworks to govern autonomous agents effectively. The OWASP Agentic Top 10 identifies 10 categories of attack against autonomous AI systems, highlighting the need for enhanced security controls to mitigate risks. Security leaders must ensure that autonomous agents are scoped with minimum required permissions, monitored for anomalous behavior, and audited regularly to prevent misuse and abuse.

Continuous Compliance and the Neurons AI self-service agent from Ivanti offer automated enforcement frameworks for patch management and ITSM, respectively. These tools streamline security operations, reduce manual effort, and enhance overall security posture. By integrating governance controls into autonomous platforms, organizations can mitigate the risks associated with AI-enabled adversaries and safeguard their critical assets from malicious attacks.

In conclusion, the rapid evolution of AI technology presents both opportunities and challenges for organizations. To stay ahead of cyber threats, security teams must prioritize governance and compliance when deploying autonomous agents. By conducting regular audits, implementing policy enforcement mechanisms, and ensuring data context validation, organizations can effectively mitigate the risks associated with AI-enabled adversaries and secure their digital assets in an increasingly complex threat landscape.

Related Articles

Back to top button