India Employment News

Explainer: Create AI content with caution. The Centre has notified new rules; here are the key things to know.


The government has amended India's IT intermediary rules, bringing AI-generated content—deepfake videos, synthetic audio, and altered visuals—under a formal regulatory framework for the first time. Notified through gazette notification G.S.R. 120(E) and signed by Joint Secretary Ajit Kumar, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into effect on February 20. So, let's explore the key points about these rules.

What's at the heart of the rule?
The premise of this rule is simple. Platforms will be required to label synthetically generated information (SGI) clearly enough for users to recognize it immediately. They will also be required to embed persistent metadata and unique identifiers so that content can be traced back to its origin. And once these labels are applied, they cannot be altered, suppressed, or removed.

What does the government consider AI-generated content?
For the first time, the central law now includes a formal definition of "synthetically generated information." It covers any audio, visual, or audio-visual content created or altered using computer resources that appears realistic and depicts people or events as if they were real.

What is exempt?
Not everything touched by a filter qualifies. Routine editing, such as color correction, noise reduction, compression, and translation, is exempt, as long as it does not distort the original meaning. Research papers, training materials, PDFs, presentations, and hypothetical drafts that use illustrative content are also exempt.

Obligations for Content Made with AI
The main compliance burden falls on major social media platforms such as Instagram, YouTube, and Facebook. Under the new Rule 4(1A), before a user uploads content, the platform must ask: Is this content made with AI? But it does not end with self-declaration. Platforms will also be required to run automated tools for cross-verification, checking the format, source, and nature of the content before it goes live.

If content is flagged as synthetic, it must carry a visible disclosure tag. A platform that knowingly hosts violating content is deemed to have failed its due diligence.

The government also quietly removed a proposal from its October 2025 draft. That version required watermarks on AI visuals to occupy at least 10% of the screen space. IAMAI and its members—such as Google, Meta, and Amazon—opposed this, arguing that it was too strict and difficult to implement across all formats. The rules now require labeling but remove the fixed-size watermark.

Response windows have also been sharply compressed. Platforms will now have three hours, down from 36, to act on certain legal orders; the 15-day window has been cut to seven days; and the 24-hour deadline has been halved to 12.

Which content is now criminal?
The rules also draw a clear line between synthetic content and criminal law. Child sexual abuse material, obscene content, false electronic records, explosive-related material, or deepfakes that falsely impersonate the identity or voice of a real person now fall under the Indian Penal Code, the POCSO Act, and the Explosive Substances Act.

Warnings Required
Platforms will also be required to warn users at least once every three months, in English or an Eighth Schedule language, about the penalties for misusing AI content. Furthermore, the government has clarified to intermediaries that acting in good faith against synthetic content under these rules will not, by itself, cost them safe harbour protection under Section 79 of the IT Act.

Disclaimer: This content has been sourced and edited from Dainik Jagran. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.