TechAI Desk

Google Thwarts AI-Powered Hackers Planning Mass Exploits

Google's Threat Intelligence Group (GTIG) reported thwarting a sophisticated hacking attempt that aimed to use AI models for a mass vulnerability exploitation event. The hackers were reportedly using AI to find and exploit a zero-day flaw to bypass two-factor authentication. The report highlights that groups linked to China and North Korea showed significant interest in using AI for vulnerability discovery. The incident underscores an escalating threat, as hackers weaponize tools like OpenClaw against major organizations. Industry leaders, including Anthropic and OpenAI, have responded by delaying or limiting access to advanced AI models over these security risks.


Google's Threat Intelligence Group (GTIG) reported on Monday that it successfully thwarted an attempt by malicious actors to use artificial intelligence models for a large-scale vulnerability exploitation operation.

Details of the Threat

GTIG stated with "high confidence" that it had intercepted hackers using an AI model. The model was reportedly used to discover and exploit a zero-day vulnerability (a software flaw unknown to its developers) specifically to bypass two-factor authentication (2FA).

  • Objective: The criminal threat actor intended to use the exploit in a "mass exploitation event."
  • Intervention: Google credits its proactive counter-discovery efforts with likely preventing the exploit from being deployed.
  • Scope: Google did not disclose the name of the hacking group involved, nor did it confirm that its proprietary Gemini model was compromised.

Industry Concerns Over AI Exploitation

The findings underscore a growing and significant threat: hackers are increasingly leveraging readily available AI tools, such as OpenClaw, to exploit software weaknesses. These methods pose substantial risks to corporations, government agencies, and other organizations, despite massive investments in cybersecurity defenses.


Geopolitical Links and Industry Response

Google's report specifically noted the involvement of state-linked groups:

  • Groups associated with China and North Korea demonstrated "significant interest in capitalizing on AI for vulnerability discovery."

This heightened concern has prompted reactions across the tech industry:

  • Anthropic: Delayed the rollout of its Mythos model due to fears that adversaries could use it to target decades-old software vulnerabilities. The model has since been released to select testers, including Apple, CrowdStrike, Microsoft, and Palo Alto Networks.
  • OpenAI: Announced that a variation of its latest model, GPT-5.5-Cyber, is entering a limited preview phase for vetted cybersecurity teams.

Google's report provided several examples illustrating how these tools are already being used to launch cyberattacks and develop malware.
