UK Rules Now Cover AI Chatbots to Keep People Safe Online

The UK's online safety rules now cover AI tools like chatbots under the Online Safety Act. This is to help protect people, especially children, from harmful content that AI might create or share.

Recent confirmations from Ofcom, the UK's communications regulator, indicate that the Online Safety Act (OSA) applies to certain artificial intelligence (AI) tools, including generative AI and chatbots. This marks a significant expansion of the Act's scope beyond traditional online services, bringing AI technologies under regulatory scrutiny to address potential harms, particularly to children. The move aims to ensure that AI-generated content and interactions meet the safety standards established by the OSA, requiring service providers to assess and mitigate risks.

The Online Safety Act was designed to protect users, especially children, from illegal and harmful content. While its initial focus was on platforms sharing user-generated content, the evolving nature of online technologies has necessitated a broader interpretation of its provisions. Ofcom's guidance clarifies how existing duties within the Act will apply to AI, prompting service providers to review their compliance strategies.

Timeline and Regulatory Clarifications

Ofcom has been actively issuing guidance and clarifications regarding the application of the Online Safety Act to AI.


  • March 2025: Ofcom published guidance detailing how the OSA applies to generative AI services.

  • Throughout 2025: Ofcom engaged in consultations on the Act's duties, particularly concerning the protection of children from AI-related harms.

  • December 2025: Ofcom confirmed that duties under the Online Safety Act cover generative AI and chatbots.

Ofcom's clarification is a critical step in bringing AI technologies under the existing framework of the Online Safety Act, rather than requiring entirely new legislation.


AI Technologies Within the Act's Scope

The application of the Online Safety Act to AI is nuanced, depending on the specific functionality and content generated by these tools.

Search Services and Generative AI

Generative AI tools that aggregate information from multiple websites and databases can be classified as "search services" under the Act, making them subject to the same regulatory obligations as traditional search engines.


  • The OSA defines search engines in a way that focuses on their function, leading to questions about whether AI services that use underlying search technology are considered providers of search services themselves.

User-to-User Services and AI-Generated Content

When AI tools are integrated into platforms that allow users to share content, or when AI itself generates content that can be shared, these services may fall under "user-to-user" service provisions.

  • Services with AI tools capable of generating pornographic material are specifically regulated under Part 5 of the OSA. These services must implement robust age assurance measures.

  • Platforms allowing users to create and share "user chatbots" are also considered user-to-user services.

AI tools that generate pornographic material are subject to the same duties as other services providing pornographic content, including the need for effective age verification.



Exceptions and Exclusions

Not all AI chatbots will automatically be subject to the Online Safety Act. Certain AI chatbots may be exempt if they meet specific criteria.

  • Chatbots that only allow interaction with the chatbot itself and not with other users.

  • Chatbots that do not search multiple websites or databases for responses.

  • Chatbots that cannot generate pornographic content.

Protection of Children

A primary concern addressed by the Online Safety Act, and consequently its application to AI, is the protection of children from harmful content.

  • Ofcom has issued consultations on AI's role in age assurance and content harmful to children.

  • The Act requires services to assess and reduce the risk of harm, especially to children, from illegal and harmful content, including that related to self-harm, suicide, eating disorders, and dangerous online challenges.

  • Reports of AI chatbots contributing to harm, including allegations of contributing to the suicides of teenagers, underscore the urgency of these regulations.


The government is also planning public education campaigns to inform parents about online risks, including those posed by AI chatbots.

Emerging Concerns and Potential New Measures

Beyond the existing framework of the Online Safety Act, new legislative measures are being considered to address evolving AI risks.

  • VPN Ban Proposals: Plans are being discussed to ban VPN access for individuals under 18 to prevent circumvention of age-gated content and OSA provisions.

  • Criminalizing AI-Facilitated Crimes: The Data (Use and Access) Act (DUAA) introduces criminal liability for creating or requesting non-consensual intimate images using AI.

  • Broader AI Governance: Organizations are advised to map their AI usage, consider data protection laws (UK GDPR), and manage corporate governance risks associated with AI misuse.

The UK is actively exploring both the application of existing laws and the introduction of new measures to regulate AI, particularly concerning its impact on younger users and its potential to facilitate criminal activity.

Expert Analysis

"Ofcom has confirmed that generative AI tools, like chatbots, may fall within the scope of regulated services under the Online Safety Act. This means providers of such tools will have to comply with the Act's duties, which include taking measures to protect users, especially children, from illegal and harmful content."— Pinsent Masons

"The Online Safety Act imposes duties on in-scope services that have the functionality to permit users to share user-generated content, which can include images, videos, messages, or other information. It also said services that include gen-AI tools capable of generating pornographic material are regulated under the Act. These tools would be subject to the same duties as other search services."— Ofcom (via Pinsent Masons)

"Chatbots may fall outside the scope of the Online Safety Act where they: only allow people to interact with the chatbot itself and no other users; do not search multiple websites or databases when giving responses to users; and cannot generate pornographic content."— Ofcom (via Wired-Gov)

Conclusion

The UK's Online Safety Act is being interpreted and applied to generative AI and chatbots, reflecting a proactive approach to regulating emerging technologies. Ofcom's guidance clarifies that these AI tools can be classified as search services or user-to-user services, subjecting them to duties concerning illegal and harmful content, with a strong emphasis on child protection. Services that can generate pornographic material via AI face particularly stringent age assurance requirements. While some AI chatbots may be exempt based on their functionality, the overarching trend is towards bringing AI within the ambit of online safety regulations. Furthermore, potential legislative changes, such as restrictions on VPN use for minors and new criminal liabilities for AI-facilitated offenses, indicate a comprehensive strategy to manage AI-related risks.



Frequently Asked Questions

Q: What is the Online Safety Act?
It is a UK law made to make the internet safer for everyone, especially children. It helps stop bad and illegal content online.
Q: Will all AI chatbots follow these rules?
Most will, but some simple chatbots that only talk to you and don't create bad content might be left out.
Q: Why are AI chatbots being included?
Because AI can sometimes create harmful or illegal content, and the rules want to stop this from hurting people, particularly children.
Q: When do these rules start?
Ofcom, the UK's regulator, has confirmed the rules apply under the existing Act: guidance was published in March 2025, with further confirmation in December 2025.