How Reddit turned its millions of users into an army of content moderators

One of the toughest problems for Reddit, the self-proclaimed front page of the internet, is figuring out what should and shouldn't appear in its feeds. When it comes to content moderation, a topic that has attracted increasing public scrutiny in recent years, Reddit takes a different approach to the other major social platforms. Unlike Facebook, for example, which outsources much of the work to moderation farms, Reddit relies heavily on its communities (or subreddits) to police themselves. The efforts of volunteer moderators are guided by rules established by each individual subreddit, but also by a set of values created and enforced by Reddit itself.

The company has been criticized for this model, which some have interpreted as laissez-faire and an abdication of responsibility. But Chris Slowe, CTO of Reddit, says this is a total misrepresentation.

“It may sound crazy to say on the internet today, but average humans are pretty good. If you look at Reddit at a large scale, people are creative, funny and collaborative, all the things that make civilization work,” he told TechRadar Pro. “Our underlying approach is that we want communities to establish their own cultures, political and philosophical systems. For this model to work, we must provide tools and capabilities to address the minority.”

A different beast

Slowe was Reddit's first employee, hired in 2005 as an engineer after renting two spare rooms to co-founders Steve Huffman and Alexis Ohanian. The three had met during the first run of the now-famous Y Combinator accelerator program, which left Slowe with fond memories, but also a failed startup and time to fill. Although he took a hiatus from Reddit between 2010 and 2015, Slowe's experience gives him a unique perspective on how the business has grown and how the challenges it faces have changed over time.

In the early years, he says, the job was about developing the infrastructure to cope with growth in traffic. But in his second stint, from 2016 to the present, the focus has shifted to user trust, safety and security.

“We provide tools for users to report content that violates site policies or rules set by moderators, but not everything gets flagged. And in some cases, by the time something is reported, it's too late,” he explained. “When I came back in 2016, one of my main jobs was figuring out exactly how Reddit communities work and what makes the site healthy. Once we identify the unhealthy symptoms, we work from there.”

Self-control

Unlike other social platforms, Reddit takes a layered approach to content moderation, designed to adhere as closely as possible to the company's “community first” ethos.

The most basic form of moderation is carried out by users themselves, who have the power to upvote content they like and downvote content they don't. But while this process surfaces popular posts and buries unpopular ones, popularity is not always a marker of propriety.

Community moderators act as the second line of defense, armed with the power to remove posts and ban users that violate subreddit guidelines or Reddit's content policy. The most common subreddit rule, according to Slowe, is essentially “don't be an idiot.” The company's annual transparency report, which breaks down all content removed from Reddit each year, suggests that mods are responsible for roughly two-thirds of all post removals.

To catch harmful content missed by mods, there are Reddit administrators, who are employed directly by the company. These staff members perform manual spot checks, but are also armed with technological tools that help identify problem users and monitor the interactions that take place in private.

“We use a number of signals to surface issues and establish whether individual users are trustworthy and acting in good faith,” Slowe said. “The tricky part is that you're never going to catch everything. And that's partly because it's always going to be a little gray; it's going to depend on the context.”

When asked how the situation could be improved, Slowe explained that Reddit is stuck in a difficult position: torn between the desire to enforce its community-driven policy and the knowledge that emerging technologies could help detect a greater share of abuse. For example, Reddit is already beginning to use advanced natural language processing (NLP) techniques to more accurately gauge the sentiment of user interactions. Slowe also pointed to the possibility of using AI to analyze images posted to the platform, and conceded that, over time, more moderation actions will take place without human intervention. However, he also cautioned that these newer systems are fallible, prone to bias and certainly not error-free, and could pose challenges to the Reddit model.

“It's a bit scary, actually. If we go down that route as an enforcement model, it's the same as putting cameras literally everywhere and relying on big-brained machines to tell us when there's a crime,” he said.
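That first, vote-driven layer is itself an algorithm worth a closer look. The sketch below is a Python reimplementation of the “hot” ranking function found in Reddit's formerly open-source codebase (public on GitHub until 2017); the constants come from that code, while the function and parameter names here are our own.

```python
# A minimal sketch of Reddit's vote-driven ranking layer, based on the
# "hot" function in the formerly open-source reddit codebase.
from math import log10

# Epoch used by the original code: a date in December 2005, shortly
# after the site launched.
REDDIT_EPOCH = 1134028003

def hot(ups: int, downs: int, posted_utc: float) -> float:
    """Rank a post by net votes, decayed so newer posts can overtake it.

    posted_utc is a Unix timestamp (seconds since 1970, UTC).
    """
    score = ups - downs
    # Votes count logarithmically: the first 10 net upvotes carry as
    # much weight as the next 90.
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = posted_utc - REDDIT_EPOCH
    # The divisor of 45,000 seconds (12.5 hours) means a post needs
    # roughly 10x the votes to outrank an otherwise-equal post
    # submitted half a day later.
    return round(sign * order + seconds / 45000, 7)
```

The design choice is what makes self-policing scale: because age is baked into the score, the community's collective judgment constantly refreshes the front page without any central scheduler deciding what to promote.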
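Reddit hasn't published the details of the NLP systems Slowe describes, so the following is only a minimal sketch of the general technique, assuming an off-the-shelf sentiment classifier from the Hugging Face transformers library; the flag_for_review helper and its 0.95 threshold are illustrative inventions, not Reddit's actual tooling.

```python
# A hypothetical sentiment-based triage step, NOT Reddit's pipeline.
# Requires: pip install transformers torch
from transformers import pipeline

# Downloads a small default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

def flag_for_review(comments: list[str], threshold: float = 0.95):
    """Queue strongly negative comments for human moderators.

    The threshold is an arbitrary assumption; in practice it would
    be tuned against the decisions moderators actually make.
    """
    flagged = []
    for text, result in zip(comments, classifier(comments)):
        if result["label"] == "NEGATIVE" and result["score"] >= threshold:
            flagged.append((text, result["score"]))
    return flagged
```

Note that negative sentiment is not the same thing as abuse: sarcasm, quoted insults and heated-but-civil argument all score negatively. That is precisely the gray area and context-dependence Slowe describes, and why signals like these plausibly feed a human review queue rather than triggering automatic removals.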

When the going gets tough

Content moderation is a problem that none of the social media giants can claim to have solved, as evidenced by the debates over Donald Trump's accounts and the removal of Parler from app stores. Reddit had its own part to play in those conversations, ultimately making the decision to ban the r/DonaldTrump subreddit.

As powerful as the community-driven model may be, there is a significant tension at the heart of Reddit's approach. The company aspires to give its communities near-total autonomy, but is ultimately forced to make editorial decisions about where to draw the line.

“I don't want to be the arbitrary arbiter of what's right and what's wrong,” Slowe told us. “But at the same time, we have to be able to enforce a set of rules. It's a fine line to walk.”

Reddit tries to keep its content policy as concise as possible, to eliminate loopholes and make enforcement easier, but revisions are common. For example, revenge porn was banned from the platform in 2015 under former CEO Ellen Pao, and last year the company added a clause prohibiting the glorification of violence.

“Staying true to our values also means iterating on our values, reassessing them as we discover new ways people game the system and push the limits,” Slowe explained. “When we make a change that involves taking down communities, it's the end of a long process of finding loopholes in our content policy and working back from there.”

However, while most will agree that the absence of revenge porn is an unambiguous positive, and that incitement to violence did take place on r/The_Donald, both examples demonstrate that Reddit has to exercise editorial control in the same vein as Facebook, Twitter or any other platform. When the hard questions are asked, in other words, Reddit no longer trusts its communities to come back with a palatable answer.