OpenAI removed a controversial sharing option and began working to de-index exposed content.

OpenAI has removed a controversial opt-in feature that had led to some private chats appearing in Google search results, following reporting by WIRED that found sensitive conversations were becoming publicly accessible.

Earlier this week, WIRED revealed that private ChatGPT conversations, some involving highly sensitive topics like drug use and sexual health, were unexpectedly showing up in Google search results. The issue appeared to stem from vague language in the app's "Share" feature, which included an option that may have misled users into making their chats publicly searchable.
When users clicked "Share," they were presented with an option to tick a box labeled "Make this chat discoverable." Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results.
Within hours of the backlash spreading on social media, OpenAI pulled the feature and began working to scrub exposed conversations from search results.
"Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," said Dane Stuckey, OpenAI's chief information security officer, in a post on X. "We're also working to remove indexed content from the relevant search engines."
Stuckey's comments mark a reversal from the company's stance earlier this week, when it maintained that the feature's labeling was sufficiently clear.
Rachel Tobac, a cybersecurity analyst and CEO of SocialProof Security, commended OpenAI for its prompt response once it became clear that users were unintentionally sharing sensitive content. "We know that companies will make mistakes sometimes, they may implement a feature on a website that users don't understand and impact their privacy or security," she says. "It's great to see swift and decisive action from the ChatGPT team here to shut that feature down and keep users' privacy a top priority."
In his post, OpenAI's Stuckey characterized the feature as a "short-lived experiment." But Carissa Véliz, an AI ethicist at the University of Oxford, says the implications of such experiments are troubling.