The Online Safety Act's child safety duties officially came into force on July 25, 2025, placing new legal obligations on digital platforms to shield users, particularly children, from harmful and illegal online material. The legislation empowers Ofcom, the UK's communications regulator, to enforce these rules with significant penalties, including fines of up to £18 million or 10% of qualifying worldwide revenue, and criminal liability for persistent failures by tech companies.

The Act marks a significant shift, compelling platforms such as social media services, search engines, and even private messaging apps to actively moderate content. This includes a specific focus on stopping algorithms from serving harmful material, such as content related to suicide, self-harm, and eating disorders, to minors. Adult websites are also subject to tougher age verification measures.

"The platforms are now legally required to stop toxic algorithms from feeding harmful content like suicide, self-harm, or eating disorder material to children."
Implementation follows a phased approach. While rules requiring platforms to remove illegal content took effect earlier in 2025, the current phase specifically addresses protecting children from harmful content. It requires platforms to assess and mitigate the risks associated with user-generated content.

Enforcement and Accountability
Ofcom is tasked with overseeing and enforcing the Act. Its role involves providing guidance on compliance and investigating potential breaches. The legislation grants it expanded powers, enabling it to require age verification tools on certain sites and to compel platforms to promptly remove illegal content, including child sexual exploitation and terrorist material.

"Ofcom’s online safety enforcement Guide for services: complying with the Online Safety Act."
A key aspect of the Act is the increased accountability placed upon tech companies. They are now legally obligated to protect their users, and repeated failures to do so could lead to criminal charges. This moves beyond mere content moderation to a broader responsibility for the digital environment they provide.
Underlying Concerns and Parental Hopes
The introduction of the Act comes with an acknowledgment from government officials of past shortcomings in protecting children online. The Technology Secretary, Peter Kyle, apologised to parents for delays in implementing such protections.
"I am sorry, you have been let down for too long."
This sentiment reflects a long-standing parental anxiety about children's exposure to the internet's darker aspects, areas where parents often feel disempowered. While discussions have included potential measures like "app caps" and content curfews, the Act's focus appears to be on platform-level responsibilities.
Broader Implications and Ongoing Scrutiny
The scope of the Online Safety Act extends to user-to-user platforms, search engines, and private messaging services, raising questions among privacy advocates about its impact on encrypted communications. The long-term implications for digital rights and the evolving landscape of internet regulation are expected to be substantial.
The Act requires platforms to conduct risk assessments covering illegal content and, separately, children's exposure to harm. This demands a granular approach, assessing risk across 17 specific categories of priority illegal content. Ongoing management and monitoring of these risks are integral to compliance.