How did apps that digitally undress people end up on Apple's and Google's app stores? These harmful apps pose a major threat to women's safety.
A shocking report has emerged from the Tech Transparency Project (TTP). According to the report, dozens of AI-powered apps that can digitally undress people are available on the Google Play Store and the Apple App Store. Known as "nudify" apps, they use AI to transform ordinary photos into explicit images. The presence of dozens of such apps on both stores raises serious questions about women's safety. It also raises the question of how these apps, which openly violate both companies' safety policies, made it onto the stores and racked up millions of downloads.
Millions of Downloads and Billions in Earnings
According to a report by Mashable, TTP's investigation found 55 such apps on the Google Play Store and 47 on the Apple App Store. Data analytics firm AppMagic reports that these apps have been downloaded more than 705 million times worldwide and have generated approximately $117 million, or ₹970 crore, in revenue. Apple and Google share responsibility for this situation: both companies take a commission on these apps' earnings, meaning the two tech giants are themselves profiting from this harmful business.
After TTP's findings came to light, Apple told CNBC that it had removed 28 such apps. Google said it had suspended several apps and that its investigation is ongoing. However, TTP states plainly that this action is wholly insufficient. In its report, TTP argues that both tech giants make sweeping claims about user safety yet have failed to prevent the misuse of AI deepfakes. Apps that can turn an ordinary picture of any woman into an explicit image remain readily available on their app stores, a frightening reality for women's safety.
Grok Has Also Become a Subject of Controversy
Recently, Grok, the AI chatbot from Elon Musk's company xAI, was also at the center of a global controversy for generating deepfake pornographic images. People were able to create explicit images of individuals simply by typing text prompts. Shockingly, Grok had no safeguards in place to prevent this. It was brought under control only after ultimatums from governments around the world, including India. According to reports, Grok generated more than 3 million sexually explicit images in just 11 days. Experts say governments worldwide need to enact stricter laws against non-consensual deepfakes, meaning pornographic images created without the consent of the people depicted.
Disclaimer: This content has been sourced and edited from Navbharat Times. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.