
Analysis: The Online Safety Act – legislation and backlash
The UK Online Safety Act introduces strict regulations for digital platforms, compelling them to protect users from harmful content, ensure robust moderation, and face penalties if they fail to keep users safe online.

The Act: what it is and what it does

The UK Online Safety Act is back in the news following the rollout of mandatory age verification rules, which came into force on 25 July 2025 and sparked widespread debate over privacy, censorship, and children’s online protection. The Act, which received Royal Assent on 26 October 2023, reshapes how responsibility for online content is handled in the UK by making social media, search engines, and other online services legally accountable for user safety. Its provisions have been phased in: new criminal offences such as cyber‑flashing and encouraging self‑harm took effect in January 2024, broader safety duties began in March 2025, and the most stringent children’s protections and age‑verification rules came fully into force on 25 July 2025.

The Act applies to a wide range of online services, including social media, messaging apps, video‑sharing sites, dating services, and cloud storage, even where companies are based outside the UK, as long as their services are used by British users or could cause them significant harm. A core principle is “safe by design”: platforms must proactively build systems to detect and reduce risks rather than simply react to harmful content. Algorithms and platform features must be managed to avoid repeatedly pushing dangerous material, particularly to children. Protecting children is at the heart of the law: platforms must block access to the most harmful material, such as pornography and content promoting self‑harm, suicide, and eating disorders, using effective age checks that minimise data collection. They must also shield young users from bullying, hate, dangerous stunts, and unwanted contact from strangers, while preventing attempts to bypass safety measures.

All providers are required to act against illegal content, including child sexual abuse, terrorism, fraud, and intimate image abuse. Larger platforms must also give adults greater control over what they see, without banning legal material, while respecting free expression and privacy. The law is enforced by Ofcom, which can issue fines of up to £18 million or 10% of global turnover, whichever is greater, pursue senior managers for serious breaches, and even block non‑compliant services, making platforms clearly accountable for creating a safer online environment.

The Act at a Glance

The Online Safety Act 2023

What's the Goal?

To protect children and adults from illegal and harmful content online. The Act introduces a legal duty of care on tech companies, making them legally responsible for the safety of their users.

Who is Affected?

Services with UK users that host user-generated content or are search engines. This primarily includes:

  • Social Media Platforms
  • Messaging Apps
  • Online Gaming Sites
  • Search Engines
  • Online Marketplaces

The Regulator: Ofcom

Ofcom has strong enforcement powers, including:

  • Fines of up to £18m or 10% of global turnover, whichever is greater.
  • Blocking non-compliant services in the UK.
  • Holding senior managers criminally liable.

Key Facts & Timeline

Key details about the Act's journey into law:

  • Introduced by: the Conservative government.
  • Royal Assent: 26 October 2023.
  • Implementation: Phased rollout by Ofcom from 2024 to 2026.

The New Rulebook: Key Duties for Platforms

Protect Children

Use age verification to prevent children from seeing harmful content (e.g., pornography, self-harm) and enforce age limits.

Remove Illegal Content

Proactively find and remove illegal material, especially terrorism and child sexual abuse content, and have easy reporting tools for users.

Empower Adults

Provide tools for adults to filter out content they don't want to see (like legal but harmful material) and control who can interact with them.

New Criminal Offences

The Act creates new criminal offences for harmful online behaviour, including sharing intimate images without consent (including deepfakes), cyberflashing, and encouraging serious self-harm.

Responses and Backlash

Although the Act is nearly two years old, the age verification system has generated significant debate across government, technology firms, civil society, and international observers. Ministers have presented the law as a major advance in online safety, with Technology Secretary Peter Kyle describing it as the most substantial improvement to young people’s online experience since the internet’s inception (and kicking off a storm of protest by comparing opponents of the legislation, most notably Reform leader Nigel Farage, to supporters of the notorious paedophile Jimmy Savile). Prime Minister Sir Keir Starmer has emphasised that the provisions are aimed at child protection rather than restricting lawful speech.

The legislation has, however, prompted a range of criticisms. Technology companies such as X (formerly Twitter) and organisations including Big Brother Watch and the Open Rights Group argue that the Act’s broad scope could lead to the restriction of legal content and limit political discussion. Examples of age verification blocking access to a Conservative MP’s speech on grooming gangs, as well as to forums on alcohol misuse and pet care, have been cited as evidence of potential overreach. Some US politicians have also expressed concern, describing the Act as an instance of online censorship, while the Trump administration has signalled possible diplomatic responses to the law’s perceived impact on American technology companies.

Data privacy has emerged as another key issue, as compliance often requires users to provide sensitive information such as photo identification, facial scans, or financial data. Civil liberties groups have warned that this creates new risks of data breaches and misuse, and may deter vulnerable users from seeking support online. Although the Act’s reach extends beyond pornography to gaming, social networks, and support forums, critics suggest that determined users could circumvent the measures using VPNs, potentially undermining its effectiveness. With Ofcom opening multiple investigations into platform compliance, and a petition calling for repeal surpassing 468,000 signatures, the legislation remains the subject of domestic and international scrutiny, illustrating the tension between enhancing online safety and maintaining digital freedoms.

This article was co-created with AI.