Infostealers added Clawdbot to their target lists before most security teams knew it was running
Clawdbot, now rebranded as Moltbot, has recently come under fire for significant security vulnerabilities. The AI agent’s MCP implementation lacks mandatory authentication, is open to prompt injection, and grants shell access by design. These flaws were detailed in a recent VentureBeat article, which prompted further investigation by security researchers. What they found was alarming.
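To make the missing control concrete, here is a minimal sketch of what mandatory authentication in front of a local agent gateway could look like, written in Python with only the standard library. This is an illustrative pattern, not Clawdbot’s actual gateway code; the port number is the default reported later in this article, and the token handling is an assumption.

```python
"""Sketch: a shared-secret check in front of a local agent gateway.

Illustrative only -- not Clawdbot's gateway code. The point is that every
request must carry a bearer token before any tool or shell capability is
reachable.
"""
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Token supplied out of band (e.g. via environment); never hard-coded.
EXPECTED_TOKEN = os.environ.get("AGENT_GATEWAY_TOKEN", "")


class AuthenticatedHandler(BaseHTTPRequestHandler):
    def _authorized(self) -> bool:
        """Constant-time comparison of the caller's bearer token."""
        header = self.headers.get("Authorization", "")
        supplied = header.removeprefix("Bearer ").strip()
        return bool(EXPECTED_TOKEN) and hmac.compare_digest(supplied, EXPECTED_TOKEN)

    def do_POST(self) -> None:
        if not self._authorized():
            self.send_error(401, "missing or invalid token")
            return
        # Only authenticated callers ever reach agent functionality.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok: request would be forwarded to the agent\n")


if __name__ == "__main__":
    # Bind to loopback only; binding to 0.0.0.0 is what puts instances on Shodan.
    HTTPServer(("127.0.0.1", 18789), AuthenticatedHandler).serve_forever()
```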
Commodity infostealers wasted no time in exploiting these vulnerabilities. RedLine, Lumma, and Vidar quickly added Clawdbot to their target lists, infiltrating environments before security teams were even aware of its presence. Shruti Gandhi, a general partner at Array VC, reported a staggering 7,922 attack attempts on her firm’s Clawdbot instance.
A closer look at Clawdbot’s security posture revealed even more concerning issues. SlowMist issued a warning on January 26, highlighting that hundreds of Clawdbot gateways were exposed to the internet, providing easy access to API keys, OAuth tokens, and private chat histories without the need for credentials. Archestra AI CEO Matvey Kukuy was able to extract an SSH private key via email in a mere five minutes using prompt injection.
Dubbed “Cognitive Context Theft” by Hudson Rock, the malware targeting Clawdbot goes beyond stealing passwords to gather psychological profiles, work habits, trusted contacts, and personal anxieties. This wealth of information provides attackers with everything they need for sophisticated social engineering attacks.
Clawdbot’s rapid adoption as a personal assistant meant many users installed it without weighing critical security considerations. The AI agent’s default settings left port 18789 open to the public internet, making it an easy target for cybercriminals. Red-teaming firm Dvuln discovered numerous exposed instances through a quick Shodan scan, some with no authentication in place at all, allowing full command execution.
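For teams that want a quick self-check before a scanner finds them first, a sketch like the following probes a host for that gateway port and flags any instance that answers HTTP without credentials. The port number comes from the reporting above; the request path and response handling are illustrative assumptions, not Clawdbot’s documented API.

```python
"""Minimal exposure check for an agent gateway port.

Assumptions (not from Clawdbot documentation): the gateway speaks HTTP on
port 18789 and an unauthenticated GET returning 2xx indicates open access.
"""
import socket
import sys
import urllib.error
import urllib.request

GATEWAY_PORT = 18789  # default port reported as publicly exposed


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def answers_without_auth(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an unauthenticated HTTP request gets a 2xx response.

    The "/" path is a placeholder; adjust it to whatever endpoint the
    gateway actually serves.
    """
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except urllib.error.HTTPError:
        # A 401/403 at least means something is asking for credentials.
        return False
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
    if not port_open(target, GATEWAY_PORT):
        print(f"{target}: port {GATEWAY_PORT} not reachable")
    elif answers_without_auth(target, GATEWAY_PORT):
        print(f"{target}: port {GATEWAY_PORT} answers HTTP with NO authentication")
    else:
        print(f"{target}: port {GATEWAY_PORT} open but rejects unauthenticated requests")
```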
A supply chain attack on ClawdHub’s skills library further demonstrated the ecosystem’s weaknesses. By uploading a benign proof-of-concept skill and artificially inflating its download count, an attacker reached developers in multiple countries within hours. Those developers installed the code with no moderation or vetting standing in the way; nothing would have stopped a genuinely malicious payload from spreading by the same route.
Clawdbot’s plaintext storage of sensitive data in memory files poses a significant risk, as VPN configurations, credentials, and other confidential information are left unencrypted on disk. Without proper encryption or containerization, local-first AI agents create a new data exposure class that traditional endpoint security measures are ill-equipped to handle.
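A stopgap, pending proper support from the agent itself, is to encrypt those files at rest. The sketch below assumes a hypothetical memory file path and uses the third-party cryptography library; it is a mitigation pattern, not a fix for the agent’s own behavior.

```python
"""Sketch: encrypt an agent's memory file at rest.

The file paths are hypothetical examples; Clawdbot's actual layout may
differ. Requires: pip install cryptography
"""
from pathlib import Path

from cryptography.fernet import Fernet

MEMORY_FILE = Path("~/.clawdbot/memory.md").expanduser()  # hypothetical path
KEY_FILE = Path("~/.clawdbot/memory.key").expanduser()    # keep out of backups/sync


def load_or_create_key() -> bytes:
    """Load the symmetric key, creating one with restrictive permissions if absent."""
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    KEY_FILE.chmod(0o600)  # owner read/write only
    return key


def encrypt_at_rest() -> None:
    """Replace the plaintext memory file with an encrypted copy."""
    fernet = Fernet(load_or_create_key())
    MEMORY_FILE.write_bytes(fernet.encrypt(MEMORY_FILE.read_bytes()))


def decrypt_for_use() -> str:
    """Return the decrypted memory contents without writing plaintext back to disk."""
    fernet = Fernet(load_or_create_key())
    return fernet.decrypt(MEMORY_FILE.read_bytes()).decode("utf-8")


if __name__ == "__main__":
    encrypt_at_rest()
    print(decrypt_for_use()[:200])  # first 200 characters as a sanity check
```

Keeping the key on the same disk only raises the bar against casual exfiltration; an OS keychain or hardware token is a stronger home for it.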
AI agents like Clawdbot also sit outside traditional defenses: prompt injection and other malicious activity can pass undetected by standard security tools. As adoption of AI agents continues to rise, security teams must adapt quickly to an evolving threat landscape.
In light of these findings, security leaders must adopt a new mindset when it comes to AI agents. Agents should be treated as critical infrastructure rather than mere productivity tools, with a focus on inventory management, provenance verification, least privilege enforcement, and runtime visibility.
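Runtime visibility, at its simplest, starts on the endpoint. The sketch below (assuming the third-party psutil library and the default port reported earlier) inventories which local processes are listening on that port and whether they are bound to all interfaces rather than loopback.

```python
"""Sketch: local inventory of processes listening on an agent gateway port.

Requires: pip install psutil. Port 18789 is the default reported above;
everything else is a generic listening-socket inventory.
"""
import psutil

AGENT_PORT = 18789


def find_agent_listeners(port: int = AGENT_PORT) -> list[dict]:
    """Return listening sockets on `port`, with owning process and bind address."""
    findings = []
    for conn in psutil.net_connections(kind="tcp"):
        if not conn.laddr or conn.status != psutil.CONN_LISTEN or conn.laddr.port != port:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            name = "unknown"
        findings.append({
            "pid": conn.pid,
            "process": name,
            "bind_address": conn.laddr.ip,
            # 0.0.0.0 or :: means the gateway is reachable from outside the host.
            "publicly_bound": conn.laddr.ip in ("0.0.0.0", "::"),
        })
    return findings


if __name__ == "__main__":
    for f in find_agent_listeners():
        flag = "EXPOSED" if f["publicly_bound"] else "loopback-only"
        print(f"pid={f['pid']} process={f['process']} bind={f['bind_address']} ({flag})")
```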
Clawdbot’s rapid rise to fame and subsequent security vulnerabilities serve as a stark reminder of the dangers posed by unchecked AI agents. As the threat landscape continues to evolve, security teams must stay vigilant and proactive in mitigating risks to protect sensitive data and infrastructure.



