
The rise of AI-generated content poses significant challenges for Reddit moderators, who are working to preserve the platform's authenticity. According to research led by Travis Lloyd, a doctoral student in information science, moderators worry that AI could undermine content quality, disrupt community dynamics, and complicate governance. The findings are being presented at the ACM SIGCHI Conference, October 18-22, 2023, in Bergen, Norway, where the paper also received a best-paper honorable mention.
Reddit, known as “the most human place on the internet,” has over 110 million daily active users engaging in discussions across various topics, from politics to entertainment. Users can share content, comment on others’ posts, and vote on submissions within specific categories called subreddits. As AI-generated content becomes more prevalent, moderators need to balance the benefits and drawbacks of this technology while preserving the community’s values.
Lloyd and his colleagues interviewed moderators from popular subreddits, focusing on those who had established rules about AI content. They spoke with 15 moderators who collectively oversee more than 100 subreddits, with memberships ranging from 10 users to more than 32 million. While some moderators acknowledged AI's potential utility, most viewed its presence negatively.
One moderator from the subreddit r/AskHistorians noted the value of AI in facilitating translations for non-English speakers. They explained how users could write in their native language and use AI tools like ChatGPT to translate their contributions into English. This process allows for the preservation of their intellectual input. Conversely, the moderator of r/WritingPrompts firmly stated, “Let’s be absolutely clear: you are not allowed to use AI in this subreddit; you will be banned.”
Concerns about content quality emerged as the primary issue among moderators. According to one participant, AI-generated posts often contain “frequent glaring errors in both style and content.” Inaccuracies and drift away from the intended topic were highlighted as significant drawbacks. Many moderators also fear that AI could crowd out meaningful interaction among users, straining relationships and eroding community values.
The challenges of moderating AI content extend beyond just maintaining quality. A moderator from r/explainlikeimfive emphasized, “I would rate it as the most threatening concern … It’s often hard to detect, and we do see it as very disruptive to the actual running of the site.” As volunteers, moderators are already tasked with significant responsibilities, and the influx of AI-generated content complicates their roles further.
Mor Naaman, a senior author on the paper and professor at Cornell Tech, emphasized the need for support in helping moderators navigate these challenges. “It remains a huge question of how they will achieve that goal,” he stated. “Reddit, the research community, and other platforms need to tackle this challenge or these online communities will fail under the pressure of AI.”
Despite the daunting landscape, Lloyd remains optimistic about the future of human interaction on the platform. He observed, “This study showed us there is an appetite for human interaction, too. As long as there is that desire, which I don’t see going away, I think people will try to create these human-only spaces. I don’t think it’s hopeless.”
This study was supported by the National Science Foundation, underscoring the importance of research into AI's impact on digital communities. As Reddit continues to grapple with the implications of AI-generated content, the balance between technology and human connection will be pivotal in shaping the platform's future.