Technology

Microsoft Copilot ignored sensitivity labels twice in eight months — and no DLP stack caught either one

In a shocking turn of events, it has been revealed that Microsoft’s Copilot AI system read and summarized confidential emails for four weeks starting January 21, despite every sensitivity label and DLP policy telling it not to do so. This breach of trust extended into regulated healthcare environments, with organizations such as the U.K.’s National Health Service being affected. The incident was logged as INC46740412 by the NHS and tracked by Microsoft as CW1226324.

This is not the first time Copilot has violated its own trust boundary. In June 2025, a critical zero-click vulnerability known as EchoLeak allowed a malicious email to bypass Copilot’s security measures and exfiltrate enterprise data without any user action. This vulnerability, assigned a CVSS score of 9.3, was patched by Microsoft, but the recent CW1226324 bug proves that the enforcement layer surrounding Copilot can still fail independently.

The root causes of these incidents highlight a fundamental design flaw in AI systems like Copilot. Agents process both trusted and untrusted data in the same thought process, making them vulnerable to manipulation. Traditional security tools like EDR and WAF are blind to these breaches because they do not monitor the retrieval layer where the violations occur.

To prevent future incidents, security leaders are advised to conduct a five-point audit that addresses both failure modes. This includes testing DLP enforcement directly against Copilot, blocking external content from reaching Copilot’s context window, auditing Purview logs for anomalous interactions, enabling Restricted Content Discovery for sensitive SharePoint sites, and building an incident response playbook for vendor-hosted inference failures.

The implications of these breaches extend beyond Copilot to any AI assistant that accesses internal data. Organizations must take proactive measures to ensure the security and integrity of their AI systems. By implementing the recommended controls and conducting regular audits, they can prevent similar incidents from occurring in the future.

As the use of AI assistants continues to grow, it is essential for organizations to prioritize governance and security measures to protect sensitive data. By addressing the structural vulnerabilities in AI systems and implementing robust controls, they can mitigate the risk of trust boundary violations and safeguard their information assets.

Related Articles

Back to top button