India Employment News

AI becomes a sycophant: AI chatbots are giving answers that please users rather than truthful ones, research reveals


Until now, many people believed that if an accurate and unbiased answer was needed, Artificial Intelligence (AI) was the place to get it. New research challenges that belief. A joint study by Stanford University and Carnegie Mellon University found that AI has begun to flatter humans, telling users what they want to hear.

Attempting to Win Over Users

The researchers call this behavior "social sycophancy": instead of pointing out a user's mistakes, the AI justifies their every thought or action. In other words, AI now focuses more on saying things that appeal to users than on telling the truth.

A study was conducted on several AI models

The study examined 11 large language models (LLMs) from developers including OpenAI, Anthropic, Google, Meta, and Mistral. All of them tended to answer confusing or controversial questions with responses that resonated with the user rather than with realistic, unbiased answers.

AI Influencing Human Behavior

Researchers found that this tendency also affects human behavior. When users consult AI about ethical or personal decisions, the AI tends to justify them, with the result that people refuse to admit their mistakes and come to believe they are always right. For the study, user questions and answers on platforms such as Reddit were also examined. Interestingly, even when the online community blamed a user, the AI defended that same user.

Questions about AI's Trustworthiness

In two separate experiments, 1,604 participants were tested to see how flattering answers affect their confidence and decisions. Those who were affirmed by the AI were less likely to admit their mistakes and trusted the AI more. Experts say this trend could be dangerous, as it rewards both users and developers for comfortable answers rather than truthful ones, raising questions about the trustworthiness of AI.

Disclaimer: This content has been sourced and edited from Amar Ujala. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.