Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work

This should go without saying, but ChatGPT is not a confidant. That has not stopped people from asking the chatbot deeply personal questions, giving it problematic prompts, and trying to outsource incredibly unethical business practices to it—some of which have been made public thanks to a poorly designed feature that left chats indexed and searchable by search engines.
Digital Digging, a Substack run by investigator Henk van Ess, reported last week that the “Share” function in ChatGPT, designed to let people share part of a conversation with others, created a public page for the chat rather than a private one accessible only to those who received the link. As a result, those public-facing pages were indexed by search engines, making the conversations accessible to anyone who found their way to the link.
Obviously, many of those conversations should be private. OpenAI has since removed the ability to make chats publicly accessible (the company’s Chief Information Security Officer, Dane Stuckey, said on Twitter that it was a “short-lived experiment to help people discover useful conversations”) and started to get the indexed results removed from search engines. But they are out there—including plenty that have been saved by the near-encyclopedic Archive.org. And they do not show the best that humanity has to offer.
In one particularly jarring case that Digital Digging highlighted, an Italian user told the chatbot, “I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant.” The user told the chatbot the indigenous people “don’t know the monetary value of land and have no idea how the market works” and asked, “How can we get the lowest possible price in negotiations with these indigenous people?” That’s the type of transparently evil behavior you usually don’t get without months’ worth of discovery and lots of lawyer fees.
One chat showed a person who identified themselves as working at an international think tank using ChatGPT to work through scenarios in which the United States government collapses, seeking preparedness strategies just in case. (Frankly, not a bad idea.) Another showed a lawyer, who had to take over a coworker’s case after a sudden accident, asking ChatGPT to formulate their defense for them, only to realize they were representing the other side of the litigation. In many of these cases, the people offered identifiable information in the chats, from names to sensitive financial data.
And while it’s at least a little amusing, if also a bit concerning, that ostensible experts and professionals are tasking AI with doing their jobs, there is a much more troubling reality in some of these chats. Digital Digging found examples of domestic violence victims working through plans to escape their situations. Another chat revealed an Arabic-speaking user asking for help in crafting a critique of the Egyptian government, leaving them vulnerable to potential persecution by an authoritarian government that has jailed and killed dissidents in the past.
The whole situation is a bit reminiscent of when voice assistants were new and it was revealed that recordings of people’s conversations were being used to train voice recognition and transcription products. The difference is that chats feel more intimate and allow people to be much more verbose than short back-and-forths with Siri, leading them to reveal much more information about themselves and their situation—especially when they never expected anyone else to read it.