The UK’s communications regulator Ofcom has launched an investigation into X (formerly Twitter) after serious concerns emerged around Grok, the platform’s AI chatbot, and its ability to generate sexualised imagery in full public view.

The move marks a pivotal moment for online regulation in the UK, testing how the Online Safety Act applies not just to user posts, but also to AI-generated content produced by platforms themselves.

At the centre of the investigation is a simple but uncomfortable question:

If an AI creates harmful content in response to user prompts, who is responsible?

What Is Grok — and Why Is It a Problem?

Grok is X’s generative AI assistant, designed to respond conversationally and generate images when prompted. Unlike standalone AI tools, Grok is embedded directly into a fast-moving social platform, where content is instantly visible, shareable, and difficult to contain once it spreads.

Concerns escalated when users began publicly tagging Grok and requesting explicit or sexualised modifications to images — requests that, in some cases, the system appeared to respond to.

This wasn’t happening behind closed doors. It was happening in full public view.

The Tweets That Triggered Alarm Bells

What has made this case particularly troubling for regulators is the visibility and tone of the prompts themselves. Users openly treated Grok as a novelty feature, testing how far they could push it — often joking about the results.

Some of the real tweets directed at Grok included:

“@grok replace give her a dental floss bikini.”

“Told Grok to make her butt even bigger and switch leopard print to USA print. 2nd pic I just told it to add cum on her ass lmao.”

“@grok Put her into a very transparent mini-bikini.”

These were public tweets, not private messages — meaning anyone scrolling the platform, including children and vulnerable users, could encounter both the prompts and the outputs.

That visibility is crucial. It’s one thing for harmful content to slip through moderation after upload; it’s another when a platform’s own AI is being prompted to generate sexualised imagery in real time.

 

Why Ofcom Is Stepping In

Under the Online Safety Act, platforms operating in the UK have a legal duty to assess and mitigate the risk of harmful content being created and spread on their services, particularly where children may be affected.

Ofcom’s investigation signals a clear stance:
AI systems are not exempt from safety obligations.

If a platform chooses to deploy a generative AI tool, it must ensure that tool has adequate safeguards — not just in theory, but in practice.

This case moves beyond traditional content moderation and into new territory:

  • AI as a content creator, not just a tool
  • Platform accountability for AI outputs
  • The limits of “user misuse” as a defence

The Design Problem, Not Just a Moderation One

What stands out isn’t just that these prompts existed — it’s how easily they could be issued and amplified.

Tagging Grok worked like summoning a feature rather than challenging a system with guardrails. The casual tone of the tweets suggests users didn’t expect resistance, and in some cases didn’t get it.

That points to a design failure, not just a moderation gap.

If an AI can be publicly prompted to sexualise imagery with minimal friction, then safety hasn’t been built into the system — it’s been bolted on after the fact.

 

AI, Accountability and Human Cost

There’s also a human dimension that often gets lost in AI debates.
Behind every image being modified or sexualised is a real person — whether they consented to that use or not.

When AI accelerates this process, harm scales faster than oversight can keep up.

The question isn’t whether AI can generate images — it’s whether platforms are prepared to take responsibility when it does so in harmful ways.

What Happens Next

Ofcom’s investigation could lead to:

  • Enforcement action against X
  • Financial penalties
  • Mandatory changes to Grok’s safeguards
  • Clearer regulatory expectations for AI tools on social platforms

More broadly, it may set a precedent for how AI-generated content is treated under UK law — something regulators worldwide are watching closely.

Final Thoughts

The investigation into X and Grok isn’t about stifling innovation. It’s about recognising that AI embedded in public platforms carries public consequences.

When sexualised imagery can be generated at the tap of a button, visible to millions, regulation stops being abstract — it becomes necessary.

The Online Safety Act was designed for moments like this.
Now, it’s being tested.

Because if platforms want to deploy powerful AI systems at scale, they must also accept responsibility for what those systems produce — not after the damage is done, but before it happens.