Social media platform X has agreed to strengthen protections against illegal hate speech and terrorist content in the UK. The agreement follows months of pressure from the UK regulator Ofcom.
Under the new content moderation commitments, X will review suspected illegal content within 24 hours on average and assess at least 85% of reports within 48 hours.
The platform has agreed to restrict access in the UK to accounts linked to banned terrorist organisations. X will share quarterly performance data with Ofcom for one year and work with external experts to improve reporting systems.
Ofcom says illegal hate speech and terrorist content still persist on major platforms. The regulator's investigation into X is ongoing and covers its systems for tackling illegal content, as well as issues related to its AI chatbot Grok.
X is also under pressure from regulators in the European Union, Australia and Singapore. The European Commission has opened a formal probe into X's handling of hate speech.
The increased focus on X follows recent antisemitic attacks in the UK, including a stabbing incident in north London treated as terrorism. Advocacy groups claim the regulator's action was driven by sustained campaigning after previous attacks (e.g., Heaton Park Synagogue). According to them, X is still falling short in tackling racism.
In February, reports said Grok had generated sexualised images without consent safeguards being respected.
