Millions Use It Every Day. It's One of the Internet's Most Important Websites. Bots Are Destroying It, Piece by Piece.

In the years since ChatGPT's debut transformed Silicon Valley into an artificial intelligence hype factory, the internet's most vibrant communities have puzzled over how to adapt to the ensuing deluge of AI slop, especially as autogenerated outputs become more sophisticated. Perhaps no platform exemplifies this conundrum better than Reddit, the anonymized message-board network that's been connecting millions of humans across the world for 20 years now—as many users there increasingly wonder whether they are, indeed, still connecting with other humans.
Such concerns aren't new, but they've been heightened thanks to a shocking exercise of AI-powered manipulation. In late April, the moderation team for the popular subreddit r/ChangeMyView disclosed that researchers from the University of Zurich had conducted an “unauthorized experiment” on community members that “deployed AI-generated comments to study how AI could be used to change views.” The mods wrote that the Zurich academics had reached out in March to inform them that “over the past few months, we used multiple accounts” to publish AI-generated comments across r/ChangeMyView, acting as “a victim of rape” and “a black man opposed to Black Lives Matter,” among other hypothetical roles. “We did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible,” the team from Zurich wrote. “We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”
The r/ChangeMyView overseers did not agree with that point and responded by filing an ethics complaint with the university, demanding that the study not be published, and contacting higher-ups with Reddit's legal team. The unnamed researchers, who were invited to answer subreddit questions under the username LLMResearchTeam, insisted that “we believe the potential benefits of this research substantially outweigh its risks.” Already-outraged Redditors were not pleased. (A representative comment: “It potentially destabilizes an extremely well moderated and effective community. That is real harm.”) Neither were Reddit's executives: Chief Legal Officer Ben Lee himself posted that “we are in the process of reaching out to the University of Zurich and this particular research team with formal legal demands,” further noting that company administrators had banned all the accounts used for the experiment. The school subsequently told 404 Media that it would not publish the study, and the researchers issued a formal apology to r/ChangeMyView in early May, claiming they “regret the discomfort” and offering to “collaborate” with the subreddit to instill protective measures against other breaches of community trust.
Moderators said they'd declined the researchers' collaboration offer, noting, “This event has adversely impacted CMV in ways that we are still trying to unpack.” One user who has been an active commenter in r/ChangeMyView for nearly five years wrote that the fallout from the experiment “kinda killed my interest in posting. I think I learned a lot here, but, there's too many AI bots for Reddit to be much fun anymore. I've gotten almost paranoid about it.” Another user agreed that “so many popular subs are full of AI posts and bot comments.” Links to news coverage of the Zurich study shared in other subreddits consistently invoked the “dead internet theory,” the long-standing contention that the majority of cyberspace is populated only by bots that interact with other bots. In a Reddit message, the moderators of the popular r/PublicFreakout subreddit told me that the news did “confirm our suspicions that these bot farms are active and successful across Reddit.” Another user wrote to me, on condition of anonymity, that they now view every interaction with “increased suspicion.”
Brandon Chin-Shue, a r/ChangeMyView moderator who has been a Redditor for 15 years, told me that studies have been conducted within the subreddit before—but only with the moderators' express approval ahead of time, along with notifications to their users. “Every couple of months we usually get a teacher who wants their students to come on ChangeMyView so that they can learn how discussion and debate works, or there's a research assistant who asks about scraping some information,” Chin-Shue said. “Over the past few years, ChangeMyView has been more or less very open to these sorts of things.” He pointed to a recent study that ChatGPT creator OpenAI ran on the subreddit to test how “persuasive” its new o3-mini model could be in generating responses to arguments; the startup had also carried out a similar experiment on r/ChangeMyView for its o1 model in 2024. OpenAI has also tested generative-text models on boards like r/AskReddit, r/SubSimulatorGPT2, and r/AITA (aka “Am I the Asshole?”).
“We have pretty strict rules that not only the users but the moderators have to follow,” Chin-Shue added. “We try to be good about communicating to our users. Every time we've received and granted a request for research, we let the people know.” He also stated that he and his fellow moderators were generally pleased with Reddit's response and assistance throughout this fiasco.
That hasn't always been the case, especially when it comes to the platform's AI-era adjustments. ChatGPT's explosive introduction in late 2022 ran headlong into Reddit's plans to operate less like a free forum and more like a self-sustaining business, with new revenue sources (paid subscriptions, ads) and a public offering on the stock market. The most controversial moneymaking decision was CEO Steve Huffman's plan to charge for access to Reddit's previously free data API, as a means of restricting the amount of Reddit text that web crawlers from AI firms could automatically ingest and use for model training. This inspired widespread revolt among Redditors, especially those who'd benefited from no-cost API access for coding mobile apps, user bots, and other extensions to enhance and customize the Reddit experience. Yet Huffman won out over these aggrieved users, crushing their rebellion, installing the price tag, bringing Reddit shares public, and even getting the company to its first profitable quarter in late 2024.
Perhaps most importantly, he established exclusive AI deals throughout that year. Google now pays the company $60 million a year for permission to train its AI models on Reddit text, with the tech giant also gaining exclusive rights to display Reddit pages in search indexes. (According to analysts, Reddit is now the second-most-cited website in Google's AI Overviews, right behind the increasingly AI-flooded Quora.) OpenAI—headed by former Reddit executive Sam Altman—established a partnership with the platform to cite Reddit output in ChatGPT answers, use Reddit ad slots for OpenAI promotion, and allow the social platform to employ OpenAI software for in-app features. (Semafor reported Friday that Reddit is in talks with another Altman-founded company, Worldcoin, to have Redditors verify their identities via the startup's controversial eyeball-scanning tech.) Some of the in-house AI tools Reddit is now testing include a “Reddit Answers” generative-search feature and a “Reddit Insights” tool for advertisers to learn about topics that are driving mass interest and approval on the network. There are also the moderation bots, which, as I've reported, have drawn some backlash for allegedly overaggressive policing of comments about controversial celebrities like Elon Musk and Luigi Mangione.
Still, gatekeeping Reddit has been an admitted struggle. In an interview last year with the Verge, Huffman raged against AI companies like Anthropic, Perplexity, and Microsoft for using Reddit data without proper compensation. At the time, Anthropic told the Verge, “Reddit has been on our block list for web crawling.” But Reddit still doesn't think Anthropic has been too forthcoming, and this month, it sued the AI startup for “continuing to hit Reddit's servers over one hundred thousand times” without permission, an allegation with which Anthropic “disagrees.” In a statement to Slate, Lee, Reddit's chief legal officer, wrote, “We will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy.”
If there's been an even more difficult task than policing predatory AI crawlers, it's been keeping Reddit interactions personal, healthy, and as bot-free as possible. With mods working on a voluntary basis, there's only so much they can do to moderate comments and posts, especially if they're in charge of larger communities. The moderators for r/PublicFreakout, which has about 4.7 million members, wrote to me that they “get, on the low end, 250,000+ comments monthly.”
“We have a pretty large and active team but we cannot possibly read that many comments a month, and we definitely couldn't review the profiles of every single commenter,” one of the mods said. An ex-Redditor who went by the username Zakku_Rakusihi, a former moderator of the subreddit r/BrianThompsonMurder who works with machine learning software in his day job, told me that “a sizable portion of Reddit does not agree with AI” and thus refuses to engage with it at all—which only makes it harder for users to spot potential telltale signs of automated responses or generated text. “A lot of users still treat the majority of interactions as human, even if some AI text is pretty obvious. I've had to point those out to people a few times, and they don't get it. It doesn't pop up in their brain automatically.” It's particularly bad when it comes to AI-generated imagery, he added: “In the art- and DIY-related subreddits I've helped with, we had to implement ‘no AI-generated art' rules.”
What makes things more complicated is that Redditors have long suspected that the site has more bots than people think—going back to when the founders admitted to using fake accounts to create the illusion of activity in the platform's early days. Indeed, there were a lot of bots and fake users on Reddit long before ChatGPT brought generative AI into the mainstream. But it wasn't as though all those bots were deceptive or malevolent—many were tools deployed by moderators to ensure that communities were following the rules.
In the mid-2010s, data-security company Imperva released a report finding that in 2015 a slight majority of traffic across the entire web was generated by both “good” and “bad” bots. Since then, the “dead internet theory” antennae have been out for Reddit especially. In the post-ChatGPT era, you will find no shortage of forum posts and essays and Reddit posts asserting that the platform is now mostly bots interacting with other bots, a fate that has befallen such beloved internet repositories as Quora and DeviantArt. Even before the Zurich experiment was exposed, one r/ChangeMyView poster claimed that “there's a lot more bot activity going on currently than is openly talked about, and the effects of bots are more pronounced than people are willing to admit.”
Chin-Shue, the moderator, doesn't see it that way—yet. “I haven't seen anything that makes me convinced that that time is now,” he said. “We have other issues that actually are more annoying than AI,” including user dissatisfaction with the community's strict emphasis on neutrality and its tight moderation regime.
Still, “I think there's going to have to be some sort of reckoning on Reddit, because as the bots get better, it's going to be harder to keep yourself from being used by these bots,” Chin-Shue said. “When ChatGPT started being a thing, everybody was accusing everyone of being a bot. If you wanted to say somebody was a bad bot, you'd call them Grok. The worst thing that does is just muddy the waters and make everyone distrust each other.”
For what it's worth, Reddit executives have insisted, in public statements, that their platform should remain as human-centric as ever. In a May 5 post, Huffman himself acknowledged that “unwelcome AI in communities is a serious concern. It is the worry I hear most often these days from users and mods alike.” Then, he emphasized, “Our focus is, and always will be, on keeping Reddit a trusted place for human conversation.”
Lee wrote to Slate, “Now more than ever, people are seeking authentic human-to-human conversation.” He added, “These conversations don't happen anywhere else—and they're central to training language models like [Anthropic chatbot] Claude. This value was built through the effort of real communities, not by web crawlers.”
This is certainly true. But at a time when moderators are banning users who have unhealthy parasocial relationships with chatbots, police officers and independent AI enthusiasts are deploying their own bots and fake accounts across Reddit, and moderators are already wilting from the effort of keeping their subreddits in check, how long will those “real communities” stay real?
