UK Online Safety Act Comes Into Force July 25, 2025, Requiring Platforms to Remove Harmful Content

The Online Safety Act is now in force, requiring tech companies to protect children from harmful content online, a significant shift from the previous regime.

The Online Safety Act officially came into force on July 25, 2025, placing new legal duties on digital platforms to shield users, particularly children, from harmful and illegal online material. The legislation empowers Ofcom, the UK's communications regulator, to enforce these rules with the potential for significant penalties, including criminal liability for persistent failures by tech companies.


The Act marks a significant shift, compelling platforms such as social media services, search engines, and even private messaging apps to actively moderate content. This includes a specific focus on stopping algorithms from serving harmful material, such as content related to suicide, self-harm, and eating disorders, to minors. Adult websites are also subject to tougher age verification measures.


"The platforms are now legally required to stop toxic algorithms from feeding harmful content like suicide, self-harm, or eating disorder material to children."

Implementation follows a phased approach. While duties requiring platforms to remove illegal content took effect earlier, the current phase specifically addresses protecting children from harmful content. Platforms must assess and mitigate the risks associated with user-generated content.



Enforcement and Accountability

Ofcom is tasked with oversight and enforcement of the Act. Its role includes providing compliance guidance and investigating potential breaches. The legislation grants it expanded powers, enabling it to require age verification tools on certain sites and to compel platforms to promptly remove illegal content, including categories such as child sexual exploitation and terrorist material.



A key aspect of the Act is the increased accountability placed upon tech companies. They are now legally obligated to protect their users, and repeated failures to do so could lead to criminal charges. This moves beyond mere content moderation to a broader responsibility for the digital environment they provide.

Underlying Concerns and Parental Hopes

The introduction of the Act comes with an acknowledgment from government officials of past shortcomings in protecting children online. The Tech Secretary, Peter Kyle, offered an apology to parents for delays in implementing such protections.


"I am sorry, you have been let down for too long."

This sentiment reflects a long-standing parental anxiety about children's exposure to the internet's darker aspects, areas where parents often feel disempowered. While discussions have included potential measures like "app caps" and content curfews, the Act's focus appears to be on platform-level responsibilities.

Broader Implications and Ongoing Scrutiny

The scope of the Online Safety Act extends to user-to-user platforms, search engines, and private messaging services, raising questions among privacy advocates about its impact on encrypted communications. The long-term implications for digital rights and the evolving landscape of internet regulation are expected to be substantial.

The Act requires platforms to conduct risk assessments covering illegal content and, separately, children's exposure to harm. This demands a granular approach, with risks assessed against 17 specific categories of priority illegal content. Ongoing management and monitoring of these risks are integral to compliance.


Frequently Asked Questions

Q: What is the Online Safety Act and when did it start?
The Online Safety Act is a new law in the UK that started on July 25, 2025. It makes digital companies responsible for stopping harmful and illegal content online, especially for children.
Q: What kind of harmful content must platforms remove under the new UK law?
Platforms must remove illegal content like child abuse and terrorism material. They also need to stop algorithms from sharing harmful content about suicide, self-harm, and eating disorders with children.
Q: Who is in charge of enforcing the Online Safety Act in the UK?
Ofcom, the UK's communications regulator, is responsible for enforcing the Online Safety Act. It can investigate companies and impose substantial fines for non-compliance.
Q: What happens to tech companies if they don't follow the Online Safety Act?
If tech companies repeatedly fail to comply with the Online Safety Act, they face serious penalties, including large fines and, for persistent failures, criminal liability for senior executives.
Q: Why was the Online Safety Act introduced in the UK?
The Act was introduced because of worries about children being exposed to harmful and illegal content online. The government apologized for delays in putting these protections in place.