India Makes New Rules for AI Content Online

The Indian government has introduced new rules for AI-generated content online. Websites and apps must now clearly label AI-made material, and they must remove flagged harmful content quickly, within just three hours. The rules are meant to stop fake or harmful AI content from spreading.

New Regulations Mandate Rapid Takedowns and Clear Labeling of AI-Generated Material

The Indian government has significantly tightened its regulations concerning artificial intelligence (AI)-generated and synthetic content on online platforms. Under these new rules, social media companies and other internet intermediaries must now remove flagged unlawful content within a mere three hours, a drastic reduction from the previous 36-hour window. Furthermore, platforms are now required to clearly label all AI-generated or modified content. These changes aim to address growing concerns about the misuse of AI for creating deceptive material, including deepfakes and non-consensual imagery. The rules, which come into effect on February 20, are expected to increase compliance demands on global platforms operating in India.

Centre tightens AI content rules, slashes takedown window to 3 hours

Context of the Amendments

The recent amendments to India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introduce a more rigorous framework for managing AI-generated content.

  • Timeline of Changes: The amendments were notified on Tuesday, February 10, 2026, and are set to take effect on February 20, 2026.

  • Key Actors: The Ministry of Electronics and Information Technology is the governing body implementing these changes. Major online platforms such as Meta (Facebook, Instagram), YouTube, X (formerly Twitter), and others are directly impacted.

  • Core Event: The government has mandated several new compliance requirements for platforms that host or enable the creation of AI-generated content.

Evidence of New Regulations

Multiple reports confirm the core elements of the tightened regulations:

  • Reduced Takedown Window: The most prominent change is the reduction of the takedown window for flagged unlawful AI-generated and synthetic content from 36 hours to three hours. In some specific cases, such as non-consensual intimate imagery and deepfakes, the window is reported to be as short as two hours.

  • Mandatory Labeling: Platforms must ensure that AI-generated or modified content is clearly and prominently labelled. This can include visible disclosures or embedded metadata.

  • User Declaration: Significant social media intermediaries are required to ask users to declare whether the content they upload is AI-generated before publication. Platforms must also deploy automated tools to verify these declarations.

  • Prohibition on Label Removal: Once applied, AI labels or metadata cannot be removed or suppressed by the platforms.

  • Definition of Synthetically Generated Information (SGI): The amendments introduce or clarify the definition of SGI to include audio-visual content that is artificially or algorithmically created, generated, modified, or altered using computer resources in a way that appears real or authentic and is likely to be perceived as indistinguishable from a natural person or real-world event.

  • Treatment of AI Content: AI-generated content used for unlawful activities will be treated on par with other illegal content.

  • Automated Tools: Platforms are directed to deploy automated systems to detect and curb illegal AI content.
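Taken together, the requirements above describe a simple compliance flow: record the user's declaration, attach a prominent label with embedded metadata, and refuse any later attempt to strip it. A minimal sketch of that flow is below; the function names, field names, and dictionary layout are illustrative assumptions, not anything prescribed by the rules themselves.

```python
import copy


def label_synthetic_content(item: dict, user_declared_ai: bool) -> dict:
    """Return a copy of an uploaded item with an AI-generation label attached.

    Hypothetical example: records the user's declaration and embeds both a
    visible label and machine-readable metadata, marking the label as locked
    so downstream code cannot remove it.
    """
    labeled = copy.deepcopy(item)
    labeled["metadata"] = {
        "synthetically_generated": user_declared_ai,
        "label": "AI-generated" if user_declared_ai else None,
        # Once applied, the label may not be removed or suppressed.
        "label_locked": user_declared_ai,
    }
    return labeled


def remove_label(item: dict) -> dict:
    """Attempt to strip the AI label; refuse when the label is locked."""
    if item.get("metadata", {}).get("label_locked"):
        raise PermissionError("AI labels may not be removed or suppressed")
    cleaned = copy.deepcopy(item)
    cleaned.pop("metadata", None)
    return cleaned


upload = {"id": "vid-001", "uploader": "user42"}
labeled = label_synthetic_content(upload, user_declared_ai=True)
print(labeled["metadata"]["label"])  # AI-generated
```

In practice, platforms would embed such provenance data in the media file itself (for example, as metadata in the image or video container) rather than in a side dictionary, but the policy logic, declare, label, lock, is the same.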

The Shift in Regulatory Speed

The government's decision to slash the takedown window for flagged content from 36 hours to three hours represents a marked acceleration in the pace of regulatory enforcement. This swift action underscores the urgency perceived by authorities in combating the spread of potentially harmful AI-generated material. The introduction of mandatory labeling and user declaration requirements signals a move towards greater transparency and user accountability regarding the origin of digital content.


Focus on Harmful and Deceptive Content

The new rules specifically target content that is deemed illegal, deceptive, sexually exploitative, non-consensual, or linked to false documents, child abuse material, explosives, or impersonation. The intention appears to be to create a robust system for identifying and removing content that poses a risk to individuals and public order. The inclusion of deepfakes and sexually exploitative material in the expedited takedown categories highlights the government's priority in addressing these specific concerns.

Impact on Platform Compliance

The amended rules impose significant new obligations on online intermediaries. The requirement to implement rapid takedown procedures, develop and deploy automated detection systems, and manage user declarations for AI-generated content will likely necessitate substantial investment in technology and operational adjustments. The stricter timelines could also lead to increased legal exposure for platforms that fail to comply, particularly in cases where flagged content remains online beyond the prescribed three-hour window.

Expert Analysis

"The government's move reflects a global trend towards regulating AI more stringently. The accelerated takedown timelines and mandatory labeling aim to strike a balance between fostering technological innovation and mitigating potential risks associated with generative AI."

"This is a substantial change. The three-hour window is exceptionally tight and will require platforms to have highly efficient content moderation systems in place. The emphasis on labeling also signals a desire for users to be aware of the nature of the content they are consuming."

Conclusion and Next Steps

The Indian government's latest amendments to its IT Rules represent a decisive step towards regulating AI-generated content. The core changes, a drastically shortened takedown period for flagged content and mandatory labeling of AI-generated material, aim to increase platform accountability and user awareness. The rules also seek to proactively prevent the spread of harmful AI-generated content through the deployment of automated tools and user declarations.


The effectiveness of these regulations will hinge on:

  • The ability of platforms to implement the required technological and procedural changes within the stipulated timeframe.

  • The clarity and consistency with which the rules are interpreted and enforced.

  • The ongoing dialogue between the government and technology companies regarding compliance challenges and best practices.

As these rules come into force on February 20, 2026, continuous monitoring and evaluation will be crucial to assess their impact on the online ecosystem in India and their role in shaping the future of AI governance.


Frequently Asked Questions

Q: What are the new rules for AI content in India?
Online platforms must label AI-generated content and remove illegal content quickly.
Q: How fast must platforms remove bad AI content?
They must remove it within three hours after it is reported.
Q: Why are these rules being made?
The rules are to help stop fake or harmful AI content, like deepfakes, from being shared online.
Q: Who needs to follow these rules?
Social media sites and other online platforms that host content must follow these rules.
Q: When do these rules start?
The new rules start on February 20, 2026.