Ofcom: Online crackdown working; more action on way

Tech companies will be forced to take action to stop illegal content from going viral, block terrorism content and explicit deepfakes at source, and stop children from being groomed through livestreams, under new proposals from Ofcom.

The new measures continue Ofcom’s implementation of the Online Safety Act and build on its illegal harms and children’s safety codes of practice, which are already in place and being enforced.

Earlier this month, it launched investigations into whether seven file-sharing services, 4chan and porn provider First Time Videos have failed to comply with their duties under the Act.

The regulator says it is keeping pace with developments and listening to the feedback and evidence it has received, and is now pushing platforms to go further by proposing additional measures to strengthen existing codes.

Ofcom maintains that if illegal content spreads rapidly online, it can lead to severe and widespread harm, especially during a crisis – such as the violent riots that followed the Southport murders last year, or a livestreamed terrorist attack. Recommender systems can exacerbate this.

To prevent this from happening, platforms should have protocols in place to respond to spikes in illegal content during a crisis, and should not recommend material to users where there are indicators it might be illegal, unless and until it has been reviewed.

If a site or app allows livestreaming, it should have a system that clearly flags user reports of livestreams posing a risk of imminent physical harm, and it should have human moderators available at all times to review content and take action in real time.

With huge volumes of content appearing online every day, Ofcom is calling on providers to make effective use of technology to make their sites and apps safer by design and prevent illegal material from reaching users. They should use a technique called hash matching to detect terrorism content and intimate images that are shared without consent, such as explicit deepfakes.
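At its simplest, hash matching means computing a digital fingerprint of uploaded content and checking it against a database of fingerprints of known illegal material. The sketch below is a minimal illustration using a cryptographic hash for exact matching; the hash values and database are hypothetical, and production systems typically use perceptual hashing so that resized or re-encoded copies still match.

```python
import hashlib

# Hypothetical database of SHA-256 fingerprints of known illegal files
# (illustrative only; this entry is the hash of the bytes b"foo").
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of uploaded content as a hex string."""
    return hashlib.sha256(data).hexdigest()

def matches_known_content(upload: bytes) -> bool:
    """True if the upload's fingerprint appears in the known-hash database."""
    return sha256_hex(upload) in KNOWN_HASHES

print(matches_known_content(b"foo"))  # True: exact match to a listed hash
print(matches_known_content(b"bar"))  # False: not in the database
```

Because a cryptographic hash changes completely if even one byte differs, real deployments pair this exact-match approach with perceptual hashes that tolerate cropping, compression and other edits.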

Ofcom is also proposing that some services should assess the role automated tools can play in detecting content, including previously undetected child sexual abuse material, content promoting suicide and self-harm and fraudulent content – and use them where they are available and effective.

In addition, the regulator is proposing further restrictions on livestreaming, which it concedes has many benefits – for gaming, showcasing talents, citizen journalism or sharing real-world experiences. However, children risk being groomed, coerced into performing sexual acts, or encouraged into acts of self-harm and suicide while livestreaming – this must change, Ofcom warns.

It is now proposing that sites and apps should prevent people from posting comments or reactions or sending gifts to children’s livestreams, and they should prevent people from recording children’s livestreams.

Under its existing codes, providers should already be taking steps to protect children from grooming. Now that it has published its guidance on highly effective age assurance, platforms should use robust age checks to underpin the measures they take to protect children from grooming and harms associated with livestreaming.

It also expects companies to ban users who share child sexual exploitation and abuse material.

Ofcom online safety group director Oliver Griffiths said: “Important online safety rules are already in force and change is happening. We’re holding platforms to account and launching swift enforcement action where we have concerns.

“But technology and harms are constantly evolving, and we’re always looking at how we can make life safer online. So today we’re putting forward proposals for more protections that we want to see tech firms roll out.”

Related stories
Social media giants face ‘full force of Online Safety Act’
Three-pronged probe into abuse of children’s privacy
Social media giants warned over children’s privacy
Hands off our kids’ data, ICO warns social media giants
Social media giants cough up €3bn for privacy failings
TikTok insists ‘we’ve changed’ following €345m EU fine