No one wants to build a “feel good” internet


If there is one policy dilemma facing nearly every tech company today, it is what to do about “content moderation,” the almost-Orwellian term for censorship.

Charlie Warzel of BuzzFeed pointedly asked the question a little more than a week ago: “How is it that the average untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, why is it that — after multiple national tragedies politicized by malicious hoaxes and misinformation — such a question even needs to be asked?”

For years, companies like Facebook, Twitter, YouTube, and others have avoided putting serious resources behind implementing moderation, preferring relatively small teams of moderators coupled with basic crowdsourced flagging tools to prioritize the worst offending content.

There has been something of a revolution in thinking over the past few months, though, as opposition to content moderation retreats in the face of repeated public outcries.

In his message on global community, Mark Zuckerberg asked “How do we help people build a safe community that prevents harm, helps during crises and rebuilds afterwards in a world where anyone across the world can affect us?” (emphasis mine) Meanwhile, Jack Dorsey tweeted this week that “We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.”

Both messages are wonderful paeans to better community and integrity. There is just one problem: neither company truly wants to wade into the politics of censorship, which is what it will take to make a “feel good” internet.

Take just the most recent example. The New York Times on Friday wrote that Facebook will allow a photo of a bare-chested male on its platform, but will block photos of women showing the skin on their backs. “For advertisers, debating what constitutes ‘adult content’ with those human reviewers can be frustrating,” the article notes. “Goodbye Bread, an edgy online retailer for young women, said it had a heated debate with Facebook in December over the image of a young woman modeling a leopard-print mesh shirt. Facebook said the picture was too suggestive.”

Or rewind a bit in time to the controversy over Nick Ut’s famous Vietnam War photograph entitled “Napalm Girl.” Facebook’s content moderation initially banned the photo, then the company unbanned it following a public outcry over censorship. Is it nudity? Well, yes, there are breasts exposed. Is it violent? Yes, it is a picture from a war.

Whatever your politics, and whatever your proclivities toward or against suggestive or violent imagery, the reality is that there is simply no obviously “right” answer in many of these cases. Facebook and other social networks are determining taste, but taste differs widely from group to group and person to person. It’s as if you have melded the audiences of Penthouse and Focus on the Family Magazine and delivered the same editorial product to both.

The answer to Warzel’s question is obvious in retrospect. Yes, tech companies have failed to invest in content moderation, and for a specific reason: it’s intentional. There is an old saw about work: if you don’t want to be asked to do something, be really, really bad at it, so no one will ask you to do it again. Silicon Valley tech companies are really, really bad at content moderation, not because they can’t do it, but because they specifically don’t want to.

It’s not hard to understand why. Suppressing speech is anathema not just to the U.S. Constitution and its First Amendment, and not just to the libertarian ethos that pervades Silicon Valley companies, but also to the safe harbor legal framework that shields online platforms from liability for the content their users post in the first place. No company wants to cross so many tripwires at once.

Let’s be clear too that there are ways of doing content moderation at scale. China does it today through a set of technologies generally referred to as the Great Firewall, as well as an army of content moderators that some estimate reaches past two million individuals. South Korea, a democracy rated free by Freedom House, has had a complicated history of requiring comments on the internet to be attached to a user’s national identification number to prevent “misinformation” from spreading.

Facebook, Google (and by extension, YouTube), and Twitter are at a scale where they could do content moderation this way if they really wanted to. Facebook could hire hundreds of thousands of people in the Midwest, which Zuckerberg just toured, and provide decent-paying, flexible jobs reading over posts and verifying images. Posts could require a user’s Social Security number to ensure that content came from bona fide humans.

As of last year, users uploaded 400 hours of video to YouTube every minute. Maintaining real-time content moderation would require 24,000 people working every hour of the day, at a cost of $8.6 million per day, or $3.1 billion per year (assuming a $15 hourly wage). That is, of course, a very liberal estimate: artificial intelligence and crowdsourced flagging can provide at least some leverage, and it is almost certainly the case that not every video needs to be reviewed as carefully, or in real time.
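For readers who want to check the math, here is a minimal back-of-envelope sketch of that estimate. The 400-hours-per-minute figure, the $15 wage, and round-the-clock, one-to-one review are the assumptions stated above, not official figures.

```python
# Back-of-envelope check on the staffing estimate above.
# Assumptions: 400 hours of video uploaded per minute, reviewers watching
# at normal speed around the clock, and a $15 hourly wage.

UPLOAD_HOURS_PER_MINUTE = 400  # hours of new video arriving each minute
HOURLY_WAGE = 15               # dollars per reviewer-hour

# 400 hours of footage arrive every minute, so keeping pace in real time
# takes 400 * 60 = 24,000 reviewers watching simultaneously.
reviewers_needed = UPLOAD_HOURS_PER_MINUTE * 60

daily_cost = reviewers_needed * 24 * HOURLY_WAGE  # 24,000 * 24 * $15 = $8.64M
annual_cost = daily_cost * 365                    # roughly $3.15B per year

print(f"Reviewers needed: {reviewers_needed:,}")
print(f"Daily cost: ${daily_cost:,}")
print(f"Annual cost: ${annual_cost:,}")
```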

Yes, it’s expensive — YouTube financials are not disclosed by Alphabet, but analysts put the service’s revenues as high as $15 billion. And yes, hiring and training tens of thousands of people is a huge undertaking, but the internet could be made “safe” for its users if any of these companies truly wanted to.

But then we go back to the challenge laid out before: what is YouTube’s taste? What is allowed and what is not? China solves this by declaring certain online discussions illegal. China Digital Times, for instance, has extensively covered the evolving blacklists of words disseminated by the government around particularly contentious topics.

That doesn’t mean the rules lack nuance. Gary King and a team of researchers at Harvard concluded in a brilliant study that China allows for criticism of the government, but specifically bans any conversation that calls for collective action — often even if it is in favor of the government. That’s a bright line for content moderators to follow, not to mention that mistakes are fine: if one post accidentally gets blocked, the Chinese government really doesn’t care.

The U.S. thankfully has very few rules around speech, and today’s content moderation systems generally handle those expeditiously. What’s left is the ambiguous speech that crosses the line for some people and not for others, which is why Facebook and other social networks get castigated by the press for blocking Napalm Girl or the bare skin of a woman’s back.

Facebook, ingeniously, has a solution for all of this. It has declared that it wants the feed to show more content from family and friends, rather than the sort of viral content that has been controversial in the past. By focusing on content from friends, the feed can show more positive, engaging content that improves a user’s state of mind.

I say it is ingenious, though, because emphasizing content from family and friends is really just a method of insulating a user’s echo chamber even further. Sociologists have long studied social network homophily, the strong tendency of people to know those similar to themselves. A friend sharing a post isn’t just more organic, it’s also content you’re more likely to agree with in the first place.

Do we want to live in an echo chamber, or do we want to be bombarded by negative, and sometimes hurtful content? That ultimately is what I mean when I say that building a feel good internet is impossible. The more we want positivity and uplifting stories in our streams of content, the more we need to blank out not just the racist and vile material that Twitter and other social networks purvey, but also the kinds of negative stories about politics, war, and peace that are necessary for democratic citizenship.

Ignorance is ultimately bliss, but the internet was designed to deliver the most information at the greatest possible speed. The two goals directly compete, and Silicon Valley companies are, in their own view, rightfully dragging their heels on deep content moderation.

Featured Image: Artyom Geodakyan/TASS/Getty Images
