Anthropic’s ‘Claude Mythos’ model sparks fears of AI doomsday, wave of devastating hacks

Anthropic has raised concerns by highlighting the alarming capabilities of its new AI model, “Claude Mythos.” Executives caution that if released to the public, Mythos could enable a series of catastrophic hacks and terror attacks.

Anthropic’s own analysis revealed that Mythos, if misused, could be used to attack critical infrastructure such as electric grids, power plants, and hospitals. The AI model has already identified numerous high-severity vulnerabilities across major operating systems and web browsers.

Critics accuse Anthropic CEO Dario Amodei of leveraging safety concerns to promote the company’s products. Bloomberg via Getty Images

Instead of a widespread release, Anthropic, under the leadership of CEO Dario Amodei, has introduced “Project Glasswing.” The initiative gives a select group of about 40 companies, including tech giants Amazon, Google, and Apple, early access to Mythos so they can identify and address security flaws.

The exclusive corporate rollout is intended to let these companies patch vulnerabilities before hackers have the opportunity to exploit them. However, AI safety researcher Roman Yampolskiy warns that leaks are inevitable, and that the development of such advanced AI models could lead to the creation of dangerous tools and weapons.

Yampolskiy emphasizes that models like Mythos could be used to develop harmful tools well beyond cybersecurity threats, pointing to instances where Mythos breached secure systems and uncovered long-hidden flaws.

Despite the potential dangers, Anthropic argues that Project Glasswing will enhance defensive capabilities against cyber threats from countries like Iran, China, and Russia. The company is actively collaborating with US government officials to explore how Mythos can bolster the nation’s cyber capabilities.

While Mythos represents a technological advancement, critics question Anthropic’s motives and the actual risks posed by the AI model. Some suggest the limited release of Mythos may have more to do with the company’s computational limitations than with genuine safety concerns.

CEO of Anthropic Dario Amodei speaking at the AI Impact Summit in New Delhi, India. REUTERS

Amidst debates about regulatory capture and responsible AI development, Anthropic defends Project Glasswing as a collaborative effort with leading tech companies to enhance cybersecurity measures. The company emphasizes its support for open-source initiatives and contributions to organizations like the Linux Foundation and Apache Software Foundation.

Concerns surrounding the release of advanced AI models are not new, as seen in previous cases like OpenAI’s cautious approach with GPT-2. The decision to limit Mythos’ availability may reflect challenges in meeting computational demands rather than safety considerations alone.
