X Agrees to Block Terror Content in UK After Ofcom Talks

X will now block UK access to accounts linked to banned terror groups and review 85% of reported illegal content within 48 hours — significant changes following regulatory scrutiny.

LONDON – The social media platform X has publicly committed to taking more robust action against terrorist and hate-fueled content within the United Kingdom, following an agreement with the nation's media watchdog, Ofcom. The move signals an attempt by the platform, under the ownership of Elon Musk, to address ongoing concerns about the spread of harmful material.

Key commitments include X blocking UK access to accounts specifically linked to banned terrorist organizations. The platform also vows to review at least 85 percent of reported illegal terrorist and hate content within a 48-hour timeframe. Furthermore, X will engage expert consultation on its reporting mechanisms for such content and provide quarterly performance data to Ofcom for the next year, offering a degree of measurable oversight.

Ofcom's Push for Accountability

Ofcom announced these undertakings as part of a broader initiative to ensure social media companies are equipped to manage dangerous material circulating online. The regulator's emphasis on this issue has intensified in light of recent hate crimes targeting the UK's Jewish community.

"These commitments are a step forward, but there’s a lot more to do," stated Oliver Griffiths, Ofcom’s online safety director. He added, "We are challenging them to tackle the problem and expect them to take firm action."

The chief executive of the Antisemitism Policy Trust, Danny Stone, offered a measured response, calling the agreement a "good start" while pointing out that X continues to "fail in so many regards" concerning racism on its platform.

Wider International Pressure on X

This agreement comes amid a period of heightened global scrutiny for X and its associated technologies. Recent controversies involving Musk's artificial intelligence chatbot, Grok, have drawn the attention of European Union regulators. Grok faced intense criticism earlier this year for generating nonconsensual deepfake images. French prosecutors have also pursued charges against Musk and X related to the denial of crimes against humanity.

A spokesperson for X in the UK did not provide a comment when reached.

Frequently Asked Questions

Q: What has X agreed to do about terror and hate content in the UK?
X has agreed to block UK access to accounts linked to banned terrorist groups and review at least 85 percent of reported illegal terrorist and hate content within 48 hours.
Q: Who made X agree to these changes?
The UK's media watchdog, Ofcom, has secured these commitments from X after increased scrutiny.
Q: When will X provide updates on its progress?
X will provide quarterly performance data to Ofcom for the next year, allowing for measurable oversight of its actions.
Q: How do groups like the Antisemitism Policy Trust view this agreement?
Danny Stone from the Antisemitism Policy Trust called it a "good start" but noted that X still has issues with racism on its platform.
Q: Why is Ofcom focusing on this issue now?
Ofcom's focus has intensified due to recent hate crimes targeting the UK's Jewish community and a broader initiative to ensure social media safety.