Government Tightens Rules on AI Content: Social Media Platforms Must Label AI Posts, Remove Deepfakes Within 3 Hours

The Government of India has issued strict new directives for social media platforms to curb the growing threat of AI-generated and deepfake content. Under the latest instructions from the Ministry of Electronics and Information Technology (MeitY), platforms such as Facebook, Instagram, YouTube, and X will now be required to clearly label all AI-generated or AI-modified content and remove flagged deepfake material within three hours.

The move comes amid rising concerns over the misuse of artificial intelligence to spread misinformation, create fake videos and images, and manipulate public opinion. Officials believe the new rules will significantly improve transparency, platform accountability, and user awareness in the digital ecosystem.

Mandatory Labeling of AI-Generated Content

According to the government order, all AI-created or AI-altered content must carry a clear and visible label identifying it as such. In addition to visible labels, platforms will also be required to embed technical identifiers or markers that confirm the content’s AI origin.

Once applied, these labels cannot be removed, altered, or hidden in any manner. The objective is to ensure that users can easily distinguish between authentic content and material created or modified using artificial intelligence.
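
The order does not prescribe a specific marking technology, but embedded provenance metadata is one common way such "technical identifiers" are implemented. The sketch below is a minimal, hypothetical illustration in Python using the Pillow library: it writes an AI-provenance flag into a PNG image's metadata and reads it back. The key names ("ai_generated", "ai_tool") are assumptions made for illustration, not identifiers defined by MeitY.

```python
# Minimal sketch: embedding and reading an AI-provenance marker in PNG
# metadata with Pillow. The metadata keys ("ai_generated", "ai_tool")
# are illustrative assumptions, not fields defined by MeitY.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_ai_marker(src_path: str, dst_path: str, tool_name: str) -> None:
    """Save a copy of the image (as PNG) with AI-provenance metadata attached."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_tool", tool_name)
    img.save(dst_path, pnginfo=meta)  # dst_path assumed to end in .png

def read_ai_marker(path: str) -> bool:
    """Return True if the image carries the AI-provenance flag."""
    img = Image.open(path)
    # PNG text chunks are exposed via the .text mapping; absent for other formats.
    return getattr(img, "text", {}).get("ai_generated") == "true"
```

Plain metadata like this is easy to strip, which is presumably why the rules insist labels must be non-removable; a real deployment would more likely rely on tamper-evident provenance standards such as C2PA content credentials or watermarking.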

The updated rules are scheduled to take effect on February 20, 2026. A draft of these regulations was first released on October 22, 2025, and feedback from stakeholders was reviewed before the guidelines were finalized.

Deepfake Content Must Be Removed Within 3 Hours

One of the most stringent aspects of the new framework is the strict timeline for removing deepfake content. If any AI-generated or manipulated video, image, or audio is flagged by the government or a court, social media platforms must take it down within three hours.

The government has clarified that this rapid response window is essential to prevent the viral spread of misleading or harmful content, especially deepfakes that can damage reputations, incite panic, or influence public discourse.

Failure to comply with the removal timeline could lead to regulatory action against the platforms.
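
As a simple illustration of how this compliance window works in practice, the hypothetical helper below computes the removal deadline from the time a takedown notice is received. The three-hour figure comes from the directive; the data shapes and function names are assumptions.

```python
# Sketch: computing the takedown deadline for flagged content. The
# 3-hour window comes from the directive; everything else is assumed.
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=3)

def removal_deadline(flagged_at: datetime) -> datetime:
    """Latest time by which flagged deepfake content must be taken down."""
    return flagged_at + REMOVAL_WINDOW

def is_compliant(flagged_at: datetime, removed_at: datetime) -> bool:
    return removed_at <= removal_deadline(flagged_at)

# Example: content flagged at 10:00 UTC must be gone by 13:00 UTC.
flag_time = datetime(2026, 2, 20, 10, 0, tzinfo=timezone.utc)
print(removal_deadline(flag_time))  # 2026-02-20 13:00:00+00:00
```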

Use of Automated Detection Tools Made Compulsory

The government has also mandated the deployment of automated tools capable of identifying illegal, misleading, or objectionable AI content. Platforms must actively monitor uploads and prevent the circulation of harmful material, particularly deepfakes related to sexual exploitation, harassment, or impersonation.

These tools are expected to play a key role in early detection and containment of AI-driven misinformation before it reaches a wider audience.
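
The directive does not name specific tools, but systems of this kind typically take the shape of a pre-publication scanning hook. The sketch below assumes a hypothetical classifier (detect_deepfake_score) and assumed thresholds; it illustrates only the gating pattern, not any mandated implementation.

```python
# Sketch of an upload-gating hook. detect_deepfake_score() stands in
# for a real detection model returning a probability in [0, 1]; the
# thresholds are assumed tuning parameters, not mandated values.
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.9   # assumed: block outright above this score
REVIEW_THRESHOLD = 0.6  # assumed: route to human review above this

@dataclass
class UploadDecision:
    action: str   # "allow", "review", or "block"
    score: float

def detect_deepfake_score(media_bytes: bytes) -> float:
    """Placeholder for a real detection model."""
    raise NotImplementedError

def gate_upload(media_bytes: bytes) -> UploadDecision:
    score = detect_deepfake_score(media_bytes)
    if score >= BLOCK_THRESHOLD:
        return UploadDecision("block", score)
    if score >= REVIEW_THRESHOLD:
        return UploadDecision("review", score)
    return UploadDecision("allow", score)
```

A two-threshold design like this reflects the trade-off such tools face: blocking automatically only on high-confidence detections while routing borderline cases to human moderators.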

Regular User Warnings and Awareness Measures

To increase user awareness, social media companies have been instructed to issue regular warnings about the misuse of AI tools. Platforms must notify users at least once every three months about content policies, rule violations, and possible penalties associated with posting illegal or deceptive AI-generated material.

The government believes that consistent communication will help users better understand their responsibilities and discourage misuse of emerging technologies.

Amendments to IT Rules 2021 in Progress

These directives are part of the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Under the proposed changes, users will also be required to disclose whether the content they upload has been created or modified using AI tools.

Social media platforms will be expected to verify such disclosures using technical solutions and enforce compliance through their content moderation systems.
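
One plausible way to verify such disclosures is to cross-check what the uploader declares against any embedded provenance marker and flag mismatches for review. The sketch below reuses the hypothetical PNG marker from the earlier example; it is an assumed workflow, not a verification method the rules prescribe.

```python
# Sketch: cross-checking a user's AI disclosure against the embedded
# marker from the earlier example. The outcome labels are assumed.
from PIL import Image

def check_disclosure(path: str, user_declared_ai: bool) -> str:
    img = Image.open(path)
    embedded_ai = getattr(img, "text", {}).get("ai_generated") == "true"
    if user_declared_ai == embedded_ai:
        return "consistent"
    if embedded_ai and not user_declared_ai:
        return "undisclosed_ai"   # marker present, but user said "not AI"
    return "unverified_claim"     # user said "AI", but no marker found
```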

A Step Toward Safer Digital Spaces

With AI tools becoming more accessible and sophisticated, the government has emphasized the need for timely regulation to prevent misuse. Officials say the new rules aim to strike a balance between innovation and safety, ensuring that technological advancements do not come at the cost of trust, privacy, or public order.

Once implemented, these measures are expected to make social media platforms more transparent and significantly reduce the impact of fake and misleading AI-generated content online.