X's Grok AI Faces New Curbs After Fake Sexual Images Scandal
X imposes new restrictions on Grok AI chatbot

Social media platform X has been forced to implement new safety restrictions on its Grok artificial intelligence chatbot following a major scandal involving the creation of fake sexual images.

Public Outcry Forces Platform U-Turn

The move follows widespread reports of users manipulating the AI tool to generate and sexualise images of real people, including women and children, without their consent, sparking significant public anger and political pressure.

Technology Secretary Liz Kendall has declared that the Labour Government will ensure strict compliance with the Online Safety Act. She welcomed X's policy shift but emphasised that the media regulator, Ofcom, must continue its robust investigation into the platform's initial failures.

"I will not rest until every social media company meets its legal duties to protect users," Kendall stated, signalling a tough stance against powerful tech firms.

Details of the New Safety Measures

The newly imposed restrictions specifically block Grok from generating or manipulating sexualised imagery of individuals. A key part of the strategy involves geoblocking: in countries where such content is illegal, the tool will refuse to generate images of people in revealing clothing.

However, experts and campaigners have immediately questioned the effectiveness of this approach, warning that tech-savvy users could easily circumvent the regional blocks with a Virtual Private Network (VPN), rendering the safeguards potentially useless.

Ongoing Scrutiny and Campaigner Demands

Despite the update, Ofcom has confirmed its investigation into X will proceed. The regulator is seeking definitive answers on how the original safeguards failed and is focused on ensuring permanent fixes are in place to prevent a repeat incident.

Government insiders view the policy reversal as a victory for Prime Minister Keir Starmer, who had previously condemned the AI abuse as "disgusting." The Labour Party asserts its commitment to using the full force of the law to hold technology giants accountable.

Campaign groups have reacted with caution. The End Violence Against Women Coalition argued that the episode proves tech platforms cannot be trusted to self-regulate. They are calling for more proactive government intervention to ensure companies cannot profit from or facilitate online abuse.

The situation highlights the ongoing challenges of regulating rapidly evolving AI technology within existing legal frameworks designed for human-generated content.