
Seven steps to AI supply chain visibility — before a breach forces the issue

The integration of artificial intelligence (AI) into enterprise applications is accelerating, with four in 10 applications expected to feature task-specific AI agents this year. Yet despite this growth, only 6% of organizations have an advanced AI security strategy in place, according to Stanford University’s 2025 AI Index Report.

As AI continues to evolve, so do the threats associated with it. Palo Alto Networks predicts that 2026 will see the first major lawsuits holding executives personally liable for rogue AI actions. Because AI threats are accelerating and unpredictable, organizations are struggling to contain them, and traditional governance responses such as bigger budgets or more headcount are not sufficient on their own.

One of the core problems is a visibility gap: most organizations cannot say with confidence how AI models are being used or modified across departments and tools. Without that visibility, AI security becomes a guessing game and incident response becomes extremely difficult. Visibility has shown little consistent improvement, and that stagnation is itself a significant security risk.

To close this gap, organizations need software bills of materials (SBOMs) that cover AI models. The U.S. government already requires SBOMs for software acquired by federal agencies, and AI models warrant the same level of rigor. Adoption of AI-specific SBOMs, however, lags well behind, leaving organizations exposed.
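To make the idea concrete, an AI-specific SBOM entry extends familiar SBOM fields (name, version, supplier, hashes) with model-level metadata such as the base model, training-data provenance, and serialization format. The sketch below is a minimal, hypothetical record loosely modeled on CycloneDX-style machine-learning components; the field names and all model details are illustrative assumptions, not a spec-conformant document.

```python
import json

# Hypothetical, minimal AI-SBOM entry. Field names are loosely modeled
# on CycloneDX-style component records; values are illustrative only.
ai_sbom_entry = {
    "type": "machine-learning-model",
    "name": "sentiment-classifier",          # assumed internal model name
    "version": "2.3.0",
    "supplier": "internal-ml-team",          # who produced/approved it
    "hashes": [{"alg": "SHA-256", "content": "<model-file-digest>"}],
    "properties": {
        "base_model": "distilbert-base-uncased",  # upstream dependency
        "training_data_ref": "dataset-id-1142",   # provenance pointer
        "serialization": "safetensors",           # storage format
    },
}

print(json.dumps(ai_sbom_entry, indent=2))
```

The point of the structure is that the same scanners and policy engines already consuming software SBOMs can be taught to flag models with unknown suppliers, missing hashes, or unreviewed training-data provenance.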

A recent survey by Harness revealed that 62% of security practitioners have no way of determining where AI models are being used within their organizations. This lack of visibility leads to increased risks such as prompt injection, vulnerable code, and jailbreaking. Despite significant investments in cybersecurity software, many organizations are still vulnerable to these attacks due to the lack of visibility into their AI models.

IBM’s 2025 Cost of a Data Breach Report found that 13% of organizations reported breaches of AI models or applications, and 97% of those breached lacked proper AI access controls. Shadow AI, or unauthorized AI use, was responsible for one in five reported breaches, and those breaches cost organizations significantly more than comparable incidents that did not involve shadow AI.

The practical response is to implement AI-specific SBOMs and improve visibility into the AI supply chain. Organizations can start by building a model inventory, using discovery techniques to surface and manage shadow AI, and requiring human approval before any model reaches production. Mandating SafeTensors for new deployments and piloting ML-BOMs for high-risk models further strengthens the posture.

In conclusion, 2026 will be a year of reckoning for AI SBOMs, as securing AI models becomes a boardroom priority. With the EU AI Act already in effect and cyber insurance carriers closely monitoring AI governance practices, organizations must prioritize visibility and compliance in their AI supply chains to ensure safe and secure AI implementation.
