What did you talk about with ChatGPT? 100,000 conversations are now public on Google: from company secrets to love dramas.

Can you imagine the most intimate or most lurid conversation you had with ChatGPT ending up floating around Google, accessible to anyone with a click? For thousands of ChatGPT users, that nightmare is already a reality. Among the nearly 100,000 exposed chats is everything from confidential contract drafts to unsent love messages, a silent testament to how we relate to these digital tools.
The scene is reminiscent of the early days of social media, when people posted without realizing the entire world could see it. This time, we're not talking about vacation photos or innocent comments, but about interactions with an AI that, for many, functioned as a work assistant, a romantic confessor, and an impromptu editor. The inevitable question: how far does our responsibility as users extend, and where does that of the platforms that promise privacy begin?
The leak not only exposes a design flaw in an experimental OpenAI feature. It also opens a disturbing window into human behavior toward technology: we entrust secrets and important documents to an algorithm, without questioning whether its "digital vault" is truly secure.
How nearly 100,000 ChatGPT conversations were leaked

The story begins with a seemingly innocuous feature: the "share" button in ChatGPT. Until a few days ago, any user could generate a public link to their conversation to send to third parties. These links, by their open nature, were indexable by search engines like Google.
The result was a predictable chain of events. First, some users shared links with sensitive content without realizing they were publicly exposed. Then, Google crawled those URLs, including them in its index. Finally, researchers and curious individuals began locating and compiling the chats, forming a massive archive of nearly 100,000 conversations.
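The indexing step is worth making concrete. A page served at a public URL will be indexed once Google discovers it, unless the site explicitly opts out via robots.txt or a "noindex" directive. Below is a minimal Python sketch (standard library only) of how anyone could check a shared link for those opt-out signals; the URL and path are hypothetical, and this illustrates the general mechanism, not OpenAI's actual setup.

```python
# Minimal sketch: checking whether a public URL carries any signal
# telling search engines not to index it. The shared-link URL below is
# hypothetical; it illustrates the general mechanism, not OpenAI's setup.
import urllib.request
import urllib.robotparser

SHARED_URL = "https://chatgpt.com/share/example-conversation-id"  # hypothetical path

# 1. Does robots.txt forbid crawling this path for Google's crawler?
robots = urllib.robotparser.RobotFileParser("https://chatgpt.com/robots.txt")
robots.read()
crawl_allowed = robots.can_fetch("Googlebot", SHARED_URL)

# 2. Does the page itself ask not to be indexed?
request = urllib.request.Request(SHARED_URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(request) as response:
    header_noindex = "noindex" in response.headers.get("X-Robots-Tag", "").lower()
    # Crude substring check; a real crawler parses the HTML properly.
    body = response.read().decode("utf-8", errors="replace").lower()
    meta_noindex = 'name="robots"' in body and "noindex" in body

if crawl_allowed and not (header_noindex or meta_noindex):
    print("No opt-out found: a search engine that discovers this URL will index it.")
else:
    print("The page explicitly asks search engines to stay away.")
```

Judging by the fact that the chats ended up in Google's index, the shared pages evidently carried no such opt-out, so the search engine treated them like any other public page.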
OpenAI confirmed the magnitude of the leak and reacted. Dane Stuckey, its CISO (Chief Information Security Officer), stated that they removed the feature to prevent people from accidentally sharing information they didn't want to make public. They also initiated a deindexing process with search engines, although it's too late to fully reverse the exposure: third parties have already downloaded the material en masse.
What type of information was exposed

The dataset reveals the sheer variety of uses people give ChatGPT. Among the indexed conversations are:
- Copies of alleged non-disclosure agreements from OpenAI and other companies.
- Draft contracts requested by business owners that include company names and business details.
- Emotional and personal queries, such as letters to ex-partners or dilemmas about romantic relationships.
- More mundane requests, such as writing LinkedIn posts or proofreading texts.
This mix of the sensitive and the banal demonstrates a phenomenon that privacy experts have warned about for years: any sharing feature, no matter how seemingly innocuous, can become a leak vector for private data if its scope is not clearly communicated.
The privacy and security implications

The exposure of these conversations raises several critical issues. The first is shared responsibility. OpenAI offered an opt-in system, but many users likely didn't understand that checking the "share" box implied global exposure. This gap between perception and reality is common in the digital age: we assume privacy is the norm when, in reality, every online interaction leaves traces.
Furthermore, the fact that third parties have already downloaded the dataset creates a persistent risk. Even if OpenAI succeeds in getting Google to remove the links, local copies can circulate in forums, leaked databases, or even on the dark web. Competing companies, attackers, or simply curious people have access to materials that include personal and corporate information.
Finally, the leak reinforces the need for AI-adapted digital literacy. Treating ChatGPT as a secure personal assistant without understanding how its features work exposes individuals and companies to reputational and legal risks.
Lessons for the future of AI and privacy

The episode of the 100,000 exposed chats offers a warning for the tech ecosystem. AI platforms will need to be more transparent about how their sharing options work, and likely design them with the privacy-by-default principle in mind.
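What would privacy by default look like in practice? One common pattern is to serve shared pages with an explicit "noindex" header, so that even a public link never enters a search index. Here is a hypothetical Python/Flask sketch of such a share endpoint; it is not OpenAI's implementation, just one way to encode the principle.

```python
# Hypothetical sketch of a privacy-by-default share endpoint in Flask.
# This is not OpenAI's code; it shows one way to encode the principle.
from flask import Flask, abort

app = Flask(__name__)

# Stand-in store: only conversations a user has explicitly shared get a token.
SHARED_CONVERSATIONS = {"abc123": "…conversation text…"}

@app.route("/share/<token>")
def shared_conversation(token: str):
    text = SHARED_CONVERSATIONS.get(token)
    if text is None:
        abort(404)  # unknown or revoked link
    response = app.make_response(text)
    # Privacy by default: the link works for whoever holds it, but
    # search engines are told never to index or cache the page.
    response.headers["X-Robots-Tag"] = "noindex, noarchive"
    return response

if __name__ == "__main__":
    app.run()
```

With that single header, an over-shared link remains reachable by whoever holds it, but it never surfaces in a Google search, which is precisely the failure mode this episode exposed.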
For users, the lesson is twofold. First, don't thoughtlessly hand custody of confidential data to a cloud system without clear security guarantees. Second, understand that any "share" or "make public" button can reach much further than it appears.
Source: eleconomista