
Legal crackdown on deepfakes: New rules, AI labels, and 3-hour removal of inaccurate content to come into effect today


Over the past few months, the internet has been flooded with attractive, strikingly realistic AI-generated photos and videos. Earlier, photos of women were shared without consent on X, sparking considerable controversy. The easy availability of AI tools has made it remarkably simple to create and disseminate such content, with profound consequences for creativity, quality, and authenticity.

Experts believe that recommendation algorithms favor hyperrealistic AI-generated content because it drives high engagement. Since X, Instagram, and Facebook are engagement-driven platforms, such content quickly goes viral. In recent times, public figures ranging from Italy's Prime Minister Giorgia Meloni to Pakistani politician Azma Bukhari have fallen victim to deepfake videos. The growing misuse of AI tools has fueled a disturbing pattern of harassment of women and children in the digital space.

Consequently, the debate over accountability and credibility in the digital space has intensified. In light of this, the Central Government is bringing the IT Amendment Rules, 2026 into force from today, February 20. The rules carry stringent provisions, such as mandatory labeling of AI-generated content and the removal of deepfakes and fake news within three hours.

Why Monitoring All Content Is Necessary

When truth is sacrificed in the race to go viral, not just individuals but entire societies become victims. No one can predict when, or by what trigger, AI-generated synthetic content and rumors will begin to spread. Monitoring all content is therefore essential, whether through technological tools or legal measures. Content spreads across social media so quickly that no algorithm has yet developed the capability to reliably filter it or verify the veracity of its claims.

The distinction between human-made and AI-generated content is blurring. Against this backdrop, the World Economic Forum has identified misinformation driven by synthetic content as one of the biggest threats facing the world. Technical measures are currently used to identify fake content, verify its source, and detect potential manipulation. As synthetic content edges ever closer to reality, identifying fabricated media without technical support becomes increasingly difficult.

AI-enabled cybercrime is not limited to corporate fraud; it is beginning to affect everyday social life. A child sharing a deepfake video of a classmate at school, or someone being defrauded through voice cloning, is no longer a fantasy but a reality. Numerous studies show that humans cannot reliably distinguish AI-generated voices from real ones. Simply training people is not enough; a system is needed that can handle both prevention and monitoring.

Awareness Is Essential in the Use of AI
AI has made content creation much faster and easier than before. However, it has also brought growing problems of rumors, fraud, and eroding public trust. Numerous cases of digital identity misuse and online security breaches have already been reported in India, and cybersecurity reports indicate a 900 percent increase in deepfake cases worldwide between 2019 and 2024. With over 450 million social media users, India is particularly exposed to the harms of AI-generated content, and the damage extends to the broader digital community.

To address this, labels should first be made mandatory on AI-generated photos, videos, and audio, so users can tell authentic content apart from synthetic content. Second, strict regulations and prompt enforcement should target AI content used for scams, fake news, or fraud, and social media companies must be held accountable. Just as email has spam filters, platforms should deploy filters for deepfake videos. Ultimately, public awareness is paramount: people should develop the habit of verifying content before forwarding it.
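How might such a label work in practice? One existing building block is the IPTC metadata vocabulary, whose "trainedAlgorithmicMedia" digital-source-type marker some image generators already embed in a file's XMP metadata. The Python sketch below is a minimal, assumption-laden illustration of a first-pass label check: it simply scans a file's raw bytes for that marker. It is not the mechanism mandated by the IT Amendment Rules, and a robust filter would need proper metadata parsing plus content-level detection, since metadata is easily stripped.

```python
# Minimal sketch of a first-pass "AI label" check, assuming the generator
# embedded the IPTC digital-source-type marker in the file's XMP metadata.
# Illustration only; this is not the mechanism defined by the new rules.

import sys
from pathlib import Path

# Real IPTC NewsCodes URI used to mark media created by a trained AI model.
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def appears_ai_labeled(path: str) -> bool:
    """Return True if the file contains the IPTC synthetic-media marker.

    XMP metadata is stored as plain XML inside JPEG/PNG files, so a raw
    byte search is a crude but workable first pass. Limitation: stripping
    metadata defeats this check, so it can confirm a label is present but
    cannot prove an unlabeled file is authentic.
    """
    return AI_MARKER in Path(path).read_bytes()

if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = "AI label found" if appears_ai_labeled(name) else "no AI label"
        print(f"{name}: {verdict}")
```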

Disclaimer: This content has been sourced and edited from Dainik Jagran. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.