
Britain’s online safety regime begins: A new era of accountability for big tech

Britain's Online Safety Act has officially come into force, requiring tech companies such as Facebook, TikTok, and YouTube to tackle illegal content and prioritize user safety, with a risk-assessment deadline of March 16, 2025. Ofcom will enforce the rules with penalties of up to £18 million or 10% of global annual revenue, whichever is greater, to ensure platforms protect users, particularly children, from online harms.

Britain’s landmark Online Safety Act officially entered its enforcement phase on Monday 16 December 2024, marking a watershed moment for internet safety and signaling a firm crackdown on illegal online activity.

This move places the onus squarely on social media giants like Meta’s Facebook, YouTube, TikTok, and other digital platforms to make their spaces “safer by design,” particularly for children and vulnerable users.

According to commentators, the message from the UK government and its regulator, Ofcom, is clear: the freewheeling days of unchecked harmful content online are over.

Under Ofcom’s first set of published guidelines, tech platforms now face an obligation to assess and mitigate the risks posed by illegal content—such as terrorism, child sexual abuse material (CSAM), and online fraud.

They have until March 16, 2025, to evaluate their risks and outline actionable plans to tackle such harms. After that deadline, implementation begins, with sweeping safety measures expected across platforms.

In recent years, mounting public pressure and troubling events, such as the riots in the summer of 2024 that were widely believed to have been fueled by social media, pushed the government to accelerate online safety regulation. Rising concerns about children’s exposure to harmful content, cyberbullying, and sexual predators have also reached a boiling point.

Ofcom CEO Dame Melanie Dawes summarized the stakes plainly: “For too long, sites and apps have been unregulated, unaccountable, and unwilling to prioritize people’s safety over profits. That changes from today.”

The Online Safety Act, signed into law in October 2023, represents one of the world’s most ambitious attempts to regulate digital platforms.

It mirrors similar regulatory efforts in the European Union (with the Digital Services Act) and Australia, but Britain’s law uniquely introduces significant criminal liability for senior executives in extreme cases of non-compliance.

What’s required of tech firms?

The Online Safety Act sets out a detailed framework of over 40 safety measures for platforms, which vary based on a company’s size, risk profile, and user base. While smaller services will have lighter requirements, no one escapes scrutiny.

Platforms must appoint a senior executive responsible for compliance. This person will be answerable if the platform fails to meet safety standards—introducing rare personal liability in the digital space.

Social media platforms will be required to improve their moderation systems to detect and remove illegal content, such as child exploitation materials or terrorist propaganda. Reporting and complaint tools must be made easy to find, use, and act upon.

Children’s profiles, by default, will need to be private. Non-connected users should not be able to contact them, and sensitive information like location or connections must remain hidden. Algorithm testing must ensure harmful content—like self-harm promotion, pornography, or abuse—is not recommended to minors.
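To make these duties concrete, here is a minimal, hypothetical Python sketch of how a platform might apply the defaults described above: a private-by-default profile for under-18s, hidden location and connections, and a recommendation filter that drops flagged categories. Every name here (`Profile`, `apply_child_defaults`, the label strings) is an illustrative assumption, not anything specified by the Act or by Ofcom's codes.

```python
from dataclasses import dataclass

# Categories that, per the codes described above, must not be recommended to minors.
# The label strings are purely illustrative.
BLOCKED_FOR_MINORS = {"self_harm_promotion", "pornography", "abuse"}

@dataclass
class Profile:
    age: int
    is_private: bool = False
    location_visible: bool = True
    connections_visible: bool = True

def apply_child_defaults(profile: Profile) -> Profile:
    """Apply default protections to an under-18 profile."""
    if profile.age < 18:
        profile.is_private = True           # profile private by default
        profile.location_visible = False    # hide sensitive information
        profile.connections_visible = False
    return profile

def filter_recommendations(candidates: list[dict], viewer: Profile) -> list[dict]:
    """Drop content carrying blocked labels before recommending it to a minor."""
    if viewer.age >= 18:
        return candidates
    return [c for c in candidates
            if not (set(c.get("labels", [])) & BLOCKED_FOR_MINORS)]

if __name__ == "__main__":
    teen = apply_child_defaults(Profile(age=15))
    feed = [
        {"id": 1, "labels": ["sports"]},
        {"id": 2, "labels": ["self_harm_promotion"]},
    ]
    print(teen)                                # private, location and connections hidden
    print(filter_recommendations(feed, teen))  # item 2 removed
```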

Platforms at high risk of hosting CSAM must deploy hash-matching technology, automated tools that compare digital fingerprints of uploaded images and videos against databases of known abuse material, alongside URL detection to prevent distribution. Tech firms must also create dedicated fraud-reporting channels for trusted organizations, enabling rapid takedowns of known scams.
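Hash-matching is conceptually simple: compute a fingerprint of each upload and check it against a database of fingerprints of known illegal material. The sketch below uses an exact SHA-256 digest and a toy URL blocklist purely for illustration; every value is a made-up placeholder, and production systems instead use perceptual hashes (such as PhotoDNA or PDQ) and hash lists maintained by bodies like the Internet Watch Foundation.

```python
import hashlib
from urllib.parse import urlparse

# Placeholder databases for illustration only.
KNOWN_BAD_HASHES = {"<hex digest of a known illegal image>"}
BLOCKED_HOSTS = {"example-known-csam-site.invalid"}

def fingerprint(data: bytes) -> str:
    """Exact cryptographic digest; real systems use perceptual hashing instead."""
    return hashlib.sha256(data).hexdigest()

def matches_known_content(upload: bytes) -> bool:
    """True if the upload's fingerprint appears in the known-content database."""
    return fingerprint(upload) in KNOWN_BAD_HASHES

def url_is_blocked(url: str) -> bool:
    """Crude URL check against a blocklist of known distribution sites."""
    return urlparse(url).hostname in BLOCKED_HOSTS

if __name__ == "__main__":
    print(matches_known_content(b"some uploaded file bytes"))                  # False here
    print(url_is_blocked("https://example-known-csam-site.invalid/page"))      # True
```

Perceptual hashing matters in practice because it still matches images that have been resized, cropped, or re-encoded, which an exact digest like the one above would miss.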

Accounts operated by proscribed terrorist organizations must be identified and removed promptly.
Failure to comply carries harsh penalties: fines of up to £18 million or 10% of a company’s global annual revenue, whichever is greater. For context, a company the size of Meta, which reported roughly $117 billion in revenue in 2022, could face a maximum fine of more than $11 billion. In severe cases, courts may block access to non-compliant platforms entirely.
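The penalty ceiling is a simple formula: the greater of £18 million and 10% of global annual revenue. A quick worked sketch, using illustrative revenue figures and an approximate dollar-to-pound conversion:

```python
def maximum_fine(global_annual_revenue_gbp: float) -> float:
    """Statutory ceiling: £18 million or 10% of global annual revenue, whichever is greater."""
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)

# ~£92bn (roughly $117bn at an illustrative exchange rate) gives a ceiling of about
# £9.2bn, i.e. in the region of $11-12bn; a firm with £50m in revenue still faces
# the £18m floor.
print(f"£{maximum_fine(92_000_000_000):,.0f}")  # £9,200,000,000
print(f"£{maximum_fine(50_000_000):,.0f}")      # £18,000,000
```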


Why this matters: The human impact of online harm

Behind the legal jargon and heavy penalties lies a profound truth: Online harms are not just digital problems—they are real-life issues with devastating consequences.

According to Ofcom, the priority offences covered under the Act include some of the most severe societal challenges: child sexual exploitation, intimate image abuse (including revenge porn), coercive behavior, fraud, hate speech, terrorism, and assisting suicide.

For children, these harms are particularly insidious. Research commissioned by Ofcom revealed that many teenagers feel desensitized to sexualized messages and interactions with strangers online. One participant described this disturbing normalization: “It’s just part of being online. You don’t think about it anymore.”

Women and girls also face disproportionate abuse. Online harassment, stalking, and the non-consensual sharing of intimate images have become rampant. The new codes aim to empower victims by enabling stronger reporting mechanisms and faster content takedowns.

Despite the urgency to address these harms, critics argue that heavy regulation could stifle free expression and burden smaller platforms unfairly. Free speech advocates worry about over-censorship, while tech firms have raised concerns over the feasibility of implementing measures at such scale.

However, Peter Kyle, Britain’s Technology Secretary, pushed back on these criticisms: “If platforms fail to step up, the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites.”

Ofcom has stressed its “evidence-based” approach, assuring companies that flexibility exists as long as effective safety measures are in place.

Looking ahead: More changes in 2025

While the illegal harms codes are just the first step, 2025 promises an even more transformative year for online regulation.

January: Final rules on age assurance to block children’s access to pornography.

April: Additional protections for children against self-harm, eating disorder promotion, and cyberbullying.

Spring: Consultations on AI use to tackle harms, crisis protocols, and blocking accounts of repeat offenders.

Britain’s new regime reflects a broader international trend of holding tech companies accountable for user safety. The EU’s Digital Services Act already requires large platforms to monitor and remove illegal content, while countries like Australia and Canada are exploring similar legislation.

Britain’s willingness to introduce executive liability, however, sets it apart and raises the stakes for tech leaders worldwide.

The coming year will test whether tech companies are ready to prioritize safety alongside profits. Platforms with engagement-driven models, where algorithms optimize content for clicks and views, may need to make fundamental changes to avoid breaching the law.

Speaking on the BBC’s Today program, Dame Melanie Dawes was resolute: “Tech companies must test their algorithms to ensure illegal content doesn’t slip through. If it does, they must take it down immediately.”

This is a defining moment for the digital world. If platforms meet their obligations, they could pave the way for a safer, healthier internet. If not, Britain stands ready to wield its regulatory hammer.

Tech firms now have just three months to assess risks and prepare. The clock is ticking.

 

Fabrice Iranzi

Journalist and Project Leader at LionHerald, with a strong passion for tech and new ideas, serving Digital Company Builders in the UK and beyond
E-mail: iranzi@lionherald.com
