Britain’s landmark Online Safety Act officially entered its enforcement phase on Monday, December 16, 2024, marking a watershed moment for internet safety and signaling a firm crackdown on illegal online activity.
This move places the onus squarely on social media giants like Meta’s Facebook, YouTube, TikTok, and other digital platforms to make their spaces “safer by design,” particularly for children and vulnerable users.
According to commentators, the message from the UK government and its regulator, Ofcom, is now clear: The freewheeling days of unchecked harmful content online are over.
Under Ofcom’s first published codes of practice and guidance, tech platforms now face an obligation to assess and mitigate the risks posed by illegal content—such as terrorism, child sexual abuse material (CSAM), and online fraud.
They have until March 16, 2025, to evaluate their risks and outline actionable plans to tackle such harms. After that deadline, implementation begins, with sweeping safety measures expected across platforms.
In recent years, mounting public pressure and troubling events—such as the riots earlier this year, believed to have been fueled by social media—pushed the government to accelerate online safety regulations. Additionally, rising concerns about children’s exposure to harmful content, cyberbullying, and sexual predators have reached a boiling point.
Ofcom CEO Dame Melanie Dawes summarized the stakes plainly: “For too long, sites and apps have been unregulated, unaccountable, and unwilling to prioritize people’s safety over profits. That changes from today.”
The Online Safety Act, signed into law in October 2023, represents one of the world’s most ambitious attempts to regulate digital platforms.
It mirrors similar regulatory efforts in the European Union (with the Digital Services Act) and Australia, but Britain’s law uniquely introduces significant criminal liability for senior executives in extreme cases of non-compliance.
What’s required of tech firms?
The Online Safety Act, through Ofcom’s codes of practice, sets out a detailed framework of more than 40 safety measures for platforms, which vary based on a company’s size, risk profile, and user base. While smaller services face lighter requirements, no one escapes scrutiny.
Platforms must appoint a senior executive responsible for compliance. This person will be answerable if the platform fails to meet safety standards—introducing rare personal liability in the digital space.
Social media platforms will be required to improve their moderation systems to detect and remove illegal content, such as child exploitation materials or terrorist propaganda. Reporting and complaint tools must be easy to find and use, and platforms must act on the reports they receive.
Children’s profiles will need to be private by default. Users outside a child’s connections should not be able to contact them, and sensitive information like location or connections must remain hidden. Platforms must also test their algorithms to ensure harmful content—like self-harm promotion, pornography, or abuse—is not recommended to minors.
Platforms at high risk of hosting CSAM must deploy hash-matching technology—automated tools that compare images and videos to known databases of harmful content—alongside URL detection to prevent distribution. Tech firms must create dedicated fraud-reporting channels for trusted organizations, enabling rapid takedowns of known scams.
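To make the hash-matching idea concrete, here is a minimal sketch, in Python, of how an upload might be screened against a database of known-bad fingerprints. It is an illustration only: the hash value and function names are hypothetical, and real deployments rely on perceptual hashing tools such as PhotoDNA and hash lists maintained by bodies like the Internet Watch Foundation, rather than plain SHA-256 digests.

```python
import hashlib

# Hypothetical set of fingerprints of known illegal images; real systems use
# perceptual hash databases supplied by bodies such as the IWF or NCMEC.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(file_bytes: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()


def is_known_illegal(file_bytes: bytes) -> bool:
    """True if the upload's fingerprint matches a known-bad entry."""
    return fingerprint(file_bytes) in KNOWN_BAD_HASHES


# Screen an upload before it is published.
upload = b"...raw image bytes..."
if is_known_illegal(upload):
    print("Block the upload and report it to the relevant authority")
else:
    print("Allow the upload")
```

Exact-match digests like the one above break as soon as an image is resized or re-encoded, which is why production systems favour perceptual hashes that tolerate such changes.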
Accounts operated by proscribed terrorist organizations must be identified and removed promptly.
Failure to comply comes with harsh penalties: fines of up to £18 million or 10% of a company’s global annual revenue—whichever is greater. For context, Meta reported revenue of roughly $135 billion in 2023, so its theoretical maximum fine would be in the region of $13.5 billion. In severe cases, courts may block access to non-compliant platforms entirely.
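As a rough illustration of how that penalty ceiling scales with company size, the short Python sketch below applies the greater-of rule described above; the revenue figures are hypothetical, and the calculation is a simplification of how Ofcom would actually set a fine.

```python
# Statutory ceiling described above: the greater of £18 million or 10% of
# global annual revenue. Revenue figures below are hypothetical.
def maximum_fine_gbp(global_annual_revenue_gbp: float) -> float:
    """Return the maximum fine for the given annual revenue."""
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)


# A small service with £50m revenue hits the £18m floor;
# a platform with £100bn revenue faces a ceiling of £10bn.
print(f"£{maximum_fine_gbp(50_000_000):,.0f}")
print(f"£{maximum_fine_gbp(100_000_000_000):,.0f}")
```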
