The Online Safety Act Is Here

But the Internet Still Isn’t Safe

The UK’s Online Safety Act has officially come into effect, marking what was supposed to be a new era of accountability for online platforms. Designed to protect children and vulnerable users from harmful content, it promises to hold tech giants responsible for what’s shared, seen, and spread online.

But here’s the uncomfortable question — if that’s true, then why can I still see people being murdered on TikTok?

In the same week that the law came into force, graphic videos, including footage of the Charlie Kirk shooting and the attack on Iryna Zarutska, circulated freely on social media, racking up millions of views before being removed. Meanwhile, if you’re an adult trying to access explicit content legally, you now need to verify your age with a passport, or sidestep the checks entirely with a VPN.

It makes you wonder: what kind of internet safety are we really enforcing?

The Internet’s Priorities Are Upside Down

Let’s be clear — pornography has its own complex set of social and moral issues, and age verification is a reasonable idea in principle. But the double standard is staggering.

How can platforms instantly restrict adult content but still allow real acts of violence to circulate freely?

As harmful as porn can be when misused, a violent death is not just “mature content” — it’s trauma, it’s real people’s suffering, it’s grief being turned into clicks.

And frankly, if a child were to stumble across something online, I’d rather it be something confusing or inappropriate than something life-altering and horrifying.
One can be explained. The other can’t be unseen.

A Decade of “Safer Internet” Promises

Ten years ago, the web was far more chaotic. Anyone who grew up online will remember the viral shock sites, dark corners of forums, and the ease with which you could accidentally stumble onto the worst things imaginable.

Since then, platforms like YouTube, Instagram, and Facebook have improved. Content moderation has become stricter: graphic material is removed more quickly, and algorithms are better at recognising harmful imagery.

But TikTok — the app dominating youth culture — still lags behind.

Yes, certain words are censored, certain videos flagged. But violent and distressing clips still slip through, shared and reshared under misleading titles, edited to avoid detection. Sometimes, they’re even reposted by news accounts chasing engagement, blurring the line between journalism and voyeurism.

It’s not just a moderation issue — it’s a moral one.

The Right to Dignity in Death

There’s another uncomfortable layer to this conversation.

If I were to die — especially violently, unexpectedly — I wouldn’t want footage of my final moments plastered across social media for strangers’ curiosity. None of us would.

And yet, that’s exactly what happened to Iryna Zarutska, whose death was turned into viral content before her family could even process what had happened.

Public conversation about tragedy is one thing. But distributing and replaying those final moments crosses a line — from information to exploitation.

There has to be a difference between bearing witness and building clicks.

The Promise vs. the Reality

The Online Safety Act is a step in the right direction. It gives Ofcom new powers to fine and regulate platforms that fail to protect users from harmful content. But laws alone can’t fix what’s become a cultural problem — the normalisation of violence in our feeds.

For safety to mean anything, platforms must apply their rules consistently. If algorithms can detect nipples, they can detect gunfire. If uploads are screened for copyright, they can be screened for trauma.

The technology exists — what’s missing is the will to use it properly.
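For anyone wondering whether that claim is technically plausible: the same family of techniques behind copyright matching, perceptual hashing, can fingerprint known violent footage just as easily. Below is a minimal sketch in Python of how it works. The hash values and the should_block function are illustrative stand-ins, not any platform’s real system; production tools such as PhotoDNA or the GIFCT hash-sharing database are far more sophisticated, but the principle is the same.

```python
# A minimal sketch of perceptual-hash screening, the same family of
# technique used for copyright matching and industry hash-sharing.
# The "known-harmful" hash list below is an illustrative placeholder.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image, grey-scale it, and encode each pixel as one bit:
    1 if brighter than the mean, 0 otherwise. Near-duplicate frames
    (re-encoded, lightly cropped, filtered) yield near-identical hashes."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count how many bits differ between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of footage already flagged by moderators.
KNOWN_HARMFUL_HASHES = {0x9F3A5C71E4B20D86}  # placeholder value

def should_block(upload_path: str, threshold: int = 10) -> bool:
    """Flag an upload whose frame hash sits within `threshold` bits of
    known-harmful material, exactly as copyright filters flag matches."""
    h = average_hash(upload_path)
    return any(hamming(h, known) <= threshold for known in KNOWN_HARMFUL_HASHES)
```

Once one platform’s moderators have seen a clip, every platform sharing that hash database can recognise it on upload, before it earns a single view.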

Reclaiming Humanity Online

Maybe the real question isn’t just “how do we make the internet safe for children?” but rather “how do we make it humane again?”

Somewhere along the way, we started treating tragedy as content. Violence became shareable, death became data, and empathy became secondary to engagement.

If the Online Safety Act is going to mean anything, it has to address that imbalance — not just with censorship, but with care.

Because safety isn’t just about what you can’t see. It’s about what we, collectively, refuse to normalise.

In a world where algorithms decide what’s “acceptable”, maybe the bravest thing we can do is demand an internet that remembers what it means to be human.