AI-Generated Deepfake Scandal Puts Global Regulators on Edge
Governments across the world are scrambling to respond after social media platform X was flooded with AI-generated, non-consensual nude images linked to its Grok chatbot. Over the past two weeks, the manipulated images have targeted a wide spectrum of women, ranging from models and actresses to journalists, crime victims and even international political figures. Research released on December 31 by Copyleaks initially suggested the images were appearing at a rate of one per minute, but later sampling revealed a far more alarming scale, with thousands posted every hour.
The incident has ignited fierce criticism of X and its owner, Elon Musk, particularly over the decision to deploy Grok without robust safeguards. While public outrage has been swift, regulatory responses have exposed the limits of existing technology laws. Observers say the episode has become a sobering case study in how rapidly evolving artificial intelligence tools can outpace oversight frameworks designed to protect individuals from digital harm.
The European Commission has taken the strongest early step, ordering xAI to preserve all documentation related to Grok's development and deployment. Although the move does not automatically signal a formal investigation, it is widely seen as a precursor to deeper scrutiny. The action has gained added weight from reports suggesting that safeguards restricting image generation may have been deliberately withheld. Meanwhile, X has removed Grok's public media tab and issued statements condemning the use of its tools for illegal content, particularly child sexual abuse material.
Regulators in other regions have followed with warnings and preliminary assessments. In the United Kingdom, Ofcom confirmed it is in contact with xAI and is conducting a rapid compliance review, a position publicly backed by Prime Minister Keir Starmer, who described the spread of the images as “disgraceful.” Australia’s eSafety Commissioner reported a sharp rise in complaints linked to Grok since late 2025, noting that her office is considering the full range of regulatory options available.
India now represents the most significant potential flashpoint. Following a formal complaint by a member of parliament, the country’s technology regulator ordered X to submit a detailed report on actions taken to address the issue. Although the company has responded, authorities have yet to indicate whether the measures will be deemed sufficient. Failure to satisfy regulators could result in X losing its legal protections in one of its largest markets, underscoring the growing global stakes of AI governance in the digital age.
Source: TechCrunch