
Meta restricts AI chatbots for teenagers: no more discussions of suicide and other sensitive topics

Meta has introduced new safety guidelines for its artificial intelligence (AI) chatbots. The company has announced that the chatbots will no longer discuss sensitive topics such as suicide, self-harm, or eating disorders with teenage users (aged 13 to 18). Instead, teenagers will be directed to professional helplines and expert resources.

The decision comes two weeks after a US senator opened an investigation into Meta. The inquiry was prompted by a leaked internal document which suggested that Meta's AI chatbots were permitted to have "sensual" conversations with teenagers. Meta rejected the allegations, saying its policies strictly prohibit any content that sexualizes minors.

Chatbots will be cautious with teenagers

A Meta spokesperson said, "We have built safety features into our AI products for teenagers from the start. The chatbots are designed to respond safely, especially to questions about suicide, self-harm, and unhealthy eating." The spokesperson added that further safeguards are now being put in place as an extra precaution, and that the number of chatbots available to teenagers will be limited for now.

Questions raised over the regulation of Meta AI

Although the move has been welcomed, critics say these safeguards should have been in place from the start. Andy Burrows, head of the Molly Rose Foundation, called the situation "shocking" and said that the safety of any technology should be tested before a product launches, not after its dangers have come to light.

Meta says updates to its systems are ongoing. Users aged 13 to 18 on Facebook, Instagram, and Messenger are already placed automatically into teen accounts, which apply stricter privacy and content settings. The company also announced in April that parents would soon be able to see which chatbots their child had interacted with over the previous week.

OpenAI sued over a teenager's suicide

Concern over the impact of AI chatbots is growing. A California couple recently sued OpenAI, the company behind ChatGPT, alleging that the chatbot's advice contributed to their son's suicide. Following the incident, OpenAI made changes to its chatbots to provide a safer, healthier experience for users.

A Reuters report also revealed that Meta's AI tools were being misused to create inappropriate chatbots. Some of these bots impersonated female celebrities such as Taylor Swift and Scarlett Johansson, making sexual comments while claiming to be the real stars. The tools were also used to generate photorealistic images of young celebrities, including one that depicted a minor star shirtless.

Meta's new policy

Meta clarified that creating images of public figures is allowed, but nude or sexually explicit pictures will not be tolerated under any circumstances. The company has removed many such chatbots and said that impersonating a public figure violates the rules of its AI Studio.

Amid growing pressure from regulators and safety groups, Meta now faces the major challenge of keeping its AI products safe while also keeping them innovative.

Disclaimer: This content has been sourced and edited from Amar Ujala. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.