
Your ChatGPT conversation could have ended up in Google search

Many users appear to have unwittingly made their conversations with ChatGPT searchable on Google, exposing sensitive information and identifying details
  • OpenAI pulled the feature, describing it as a ‘short-lived experiment’ and saying that it was working to scrub the results from relevant search engines

As people increasingly turn to AI tools to answer delicate questions or help with mental health struggles, OpenAI has reversed course on a “short-lived experiment” that saw thousands of private conversations with ChatGPT end up in Google search results.

Fast Company reports that the “share” feature – used by many to share conversations with friends and family – turned some users’ private exchanges into search results visible to millions.

“ChatGPT conversations are private unless you choose to share them,” an OpenAI spokesperson told the outlet, emphasising that “shared chats are only visible in Google search if users explicitly select” a box marked “Make this chat discoverable.” 

The caveat explaining that checking the box would make the chat appear in search engine results, however, appeared in smaller, lighter text that users could easily miss.

Many seemingly did, as the nearly 4,500 conversations reviewed by Fast Company included potentially identifying personal details alongside sensitive information that few of us would intentionally share with millions.

After the report sparked a firestorm on social media, OpenAI quickly moved to clean up the mess. “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” Dane Stuckey, OpenAI’s chief information security officer, said in a post on X.

“We’re also working to remove indexed content from the relevant search engines,” Stuckey added, writing off the feature as a “short-lived experiment.”

Carissa Véliz, an AI ethicist at the University of Oxford, told Fast Company: “Tech companies use the general population as guinea pigs. They do something, they try it out on the population, and see if somebody complains.”

OpenAI is far from alone in treating user privacy cavalierly. Back in April, Meta launched its stand-alone AI app, and many users’ private queries ended up published in the app’s Discover feed. Billed as “a place to share and explore how others are using AI,” the feed repeatedly nudged users toward sharing, many without even knowing what that meant.

“People expect they can use tools like ChatGPT completely privately,” Rachel Tobac, a cybersecurity analyst and CEO of SocialProof Security, told Fast Company, “but the reality is that many users aren’t fully grasping that these platforms have features that could unintentionally leak their most private questions, stories, and fears.”