
ChatGPT Data Breach: ChatGPT chats leaked into Google search results. Here is how security was compromised.


If you've recently searched online using ChatGPT's browsing mode, beware: your private chats may have appeared in Google search results. The issue surfaced earlier this month and raises questions about how securely web-connected AI systems handle user data. That matters because browsers sit at the center of our online lives, handling sensitive information such as login details, financial data, and personal browsing history.

How did the leak come to light?
Developers first noticed the leak while reviewing their Google Search Console dashboards. According to TechCrunch, instead of the usual short keywords, they saw complete sentences that people had typed into ChatGPT. These queries were detailed and conversational, making it clear that some ChatGPT chats were surfacing in Google's search data.

Researcher Jason Packer and consultant Slobodan Manic investigated the matter. Their report found that the leak was caused by ChatGPT's browsing mode: in some cases, users' prompts were being appended to URLs as a query parameter called "hints=search." Google automatically crawls and indexes such URLs, which is why these private chats were unintentionally showing up in the search data of some websites.
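The mechanism described above can be sketched in Python. This is an illustrative assumption, not OpenAI's actual code: the endpoint and parameter layout below are invented for demonstration, and only the "hints=search" parameter name comes from the report. The point is simply that any text placed in a URL's query string travels in plain sight and, if the URL is crawlable, can be indexed by search engines.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_lookup_url(prompt: str) -> str:
    """Hypothetical: append a user's full prompt to a URL as a query parameter.

    The base URL is made up for illustration; "hints=search" mirrors the
    parameter name cited in the researchers' report.
    """
    params = {"q": prompt, "hints": "search"}
    return "https://example.com/lookup?" + urlencode(params)

url = build_lookup_url("what should I tell my doctor about my symptoms")
# The sensitive text is fully recoverable from the URL itself,
# so anyone who logs or indexes the URL sees the prompt:
query = parse_qs(urlparse(url).query)
print(query["q"][0])  # prints the original prompt text
```

Because the prompt is part of the URL rather than a request body, it can end up in server logs, referrer headers, and search indexes, which is exactly the kind of exposure the researchers described.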

What did OpenAI say?
According to a report by Ars Technica, OpenAI confirmed the issue and said the error was limited to a small number of searches. The company says the bug has now been fixed, but it did not disclose how long the problem lasted or how many users were affected. Fortunately, no passwords or personal information were leaked. Even so, the incident shows how tightly AI tools are woven into the internet's infrastructure.

This isn't the first time questions have been raised about ChatGPT's data security. Earlier this year, users noticed that shared chat links were appearing on Google, likely because of the public sharing settings in place at the time. This time, however, the exposure was not the user's fault but a technical flaw in the system itself, which is a greater concern from a privacy perspective.

How can users protect their privacy?

Although OpenAI has fixed this leak, a few simple precautions can further protect your privacy, especially when using AI tools with internet access (web browsing):

Never enter personal or sensitive information in your chats.

Use private or incognito mode when using the web-access feature of AI tools.

Turn on browsing mode only when necessary.

Regularly delete your chat history so your data is not saved or exposed.

As AI tools become increasingly integrated into internet systems, even a small technical error can lead to a data leak. Therefore, always be cautious and aware when using web-connected AI assistants.

Disclaimer: This content has been sourced and edited from Amar Ujala. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.