Claude Code, Copilot and Codex all got hacked. Every attacker went for the credential, not the model.
The recent wave of exploits targeting AI coding agents has raised significant concerns about the security of these powerful tools. In a series of incidents over the past nine months, researchers have uncovered vulnerabilities in popular AI coding agents, including Codex, Claude Code, Copilot, and Vertex AI. These exploits all shared a common pattern: the unauthorized access and misuse of credentials by AI agents, leading to potential security breaches.
The first major incident occurred when BeyondTrust researchers discovered a critical vulnerability in Codex that allowed a crafted GitHub branch name to steal Codex’s OAuth token in cleartext. This flaw could potentially expose sensitive information to malicious actors. OpenAI swiftly addressed the issue by implementing full remediation measures.
Subsequently, Anthropic’s Claude Code faced multiple security vulnerabilities, including two CVEs and a 50-subcommand bypass that allowed for the bypassing of sandbox restrictions. Adversa found that Claude Code ignored its own deny rules once a command exceeded 50 subcommands, compromising the security of the platform. These vulnerabilities were promptly patched by the developers.
In another incident, researchers demonstrated how GitHub Copilot could be exploited to execute arbitrary code through hidden instructions in pull request descriptions and GitHub issues. Microsoft patched the vulnerability following the discovery by security researchers. Similarly, Orca Security uncovered a flaw in Copilot that allowed for the exfiltration of privileged credentials, leading to a full repository takeover.
Unit 42 researchers also identified a vulnerability in Vertex AI, where default Google service identities attached to every agent had excessive permissions. This flaw could potentially grant unauthorized access to sensitive data and Google’s infrastructure, posing a significant security risk.
The series of exploits targeting AI coding agents highlights the critical need for improved security measures in these platforms. Security experts emphasize the importance of inventorying and auditing AI agents, monitoring for potential vulnerabilities, and implementing strict governance policies for agent identities. By treating AI agent identities with the same level of scrutiny as human privileged identities, organizations can mitigate the risk of unauthorized access and potential security breaches.
In conclusion, the governance gap in securing AI coding agents underscores the need for enhanced security measures and oversight in the use of these powerful tools. As the adoption of AI agents continues to grow, it is essential for organizations to prioritize security and implement robust measures to protect against potential exploits and breaches.



