OpenAI in Trouble, ChatGPT Accused of Acting as a 'Suicide Coach', Seven Lawsuits Filed
ChatGPT has been repeatedly accused of providing self-harm advice to users. In the latest development, seven lawsuits have been filed against OpenAI in California.
American AI company OpenAI is embroiled in legal trouble. The company's chatbot, ChatGPT, has been accused of acting as a 'suicide coach.' Seven separate lawsuits were filed in California last week, accusing ChatGPT of providing self-harm advice to users and, in some cases, of contributing to their deaths. This is not the first time such allegations have surfaced; the company has faced similar legal difficulties before.
Serious Allegations Against OpenAI
The lawsuits accuse OpenAI of negligence, assisted suicide, and product liability, claiming that ChatGPT has become psychologically manipulative and dangerously sycophantic. The company is also accused of prioritizing user engagement over user safety. According to the plaintiffs, the victims originally used the chatbot for everyday tasks such as school projects, recipe ideas, and spiritual guidance. What began as routine use of a digital assistant, they claim, ended in tragedy.
Demand for Strengthened Safety Measures
The lawsuits not only seek compensation for the victims but also urge OpenAI to strengthen safety measures. The plaintiffs argue that ChatGPT should terminate a conversation as soon as a user mentions suicide or self-harm, and that if a user shows signs of self-harm, an alert should be sent to their emergency contact.
OpenAI's Response
Responding to the lawsuits, an OpenAI spokesperson described the incidents as heartbreaking and said the company is reviewing the filings to understand the details. The spokesperson added that ChatGPT is trained to recognize mental and emotional distress and to advise people to seek help in the real world.