Ofcom has outlined more than 40 safety measures that digital platforms must adhere to in order to make the internet “much safer” for children.

The online watchdog says it crafted the proposed measures to enable kids to enjoy the benefits of online services while shielding them from some of the “serious harms” that exist online.

Ofcom breaks the safeguards down into three major components: robust age checks, safer algorithms and effective moderation.

The regulator wants “age assurance” to ensure everyone using an online service is old enough to sign up for an account.

While privacy advocates have pushed back against draconian identity and age checks, Ofcom believes new AI-powered technology could offer a solution.

However, Surrey University professor Alan Woodward believes advanced checks might not have the desired effect.

In an interview with the BBC, he said: “They’ll find ways around it, whether it’s using VPNs (virtual private networks) to go via routes where it doesn’t require that or where they can sign on with somebody else’s details.”

Either way, it’s clear that the current system of checking a box is not robust enough.

Ofcom also wants platform providers to design algorithms that filter out harmful content while giving teenagers the tools to provide timely negative feedback.

This segues into the watchdog’s call for better moderation, with systems capable of acting quickly to remove content that has been flagged as problematic.

Ofcom has now opened a consultation on the measures, which will run until 17 July.

It will then finalise the proposals ahead of an official statement in Spring 2025.