The Soft Bigotry of Algorithmic Moderation

Currents

McKinsey predicts that algorithms will automate 45 million jobs by 2030. Amid the sudden explosion of content-creating A.I., such as DALL-E and GPT-3, scientists and journalists have begun to proclaim that the end is near for many creative professions. It is no longer only truckers and cashiers who have to worry. As Nick Bilton writes, “New advancements in A.I. have made it clear that writers, illustrators, photographers, journalists, and novelists could soon be driven from the workforce and replaced by high-tech player pianos.” Left unspoken was the dark flip side of content creation: moderation. If the role of the author is unsafe, the role of the editor isn’t far off. With regard to freedom of expression, as well as the quality of said expression, moderation-by-algorithm poses a unique threat, especially to those who refuse to color within the lines. The arc of history may bend toward justice when it comes to humans, but algorithms have proven themselves capable of a variety of bigotries.

On Easter of 2009, Jeff Bezos stole the headlines from Jesus Christ himself. An Amazon technician had changed a label in a single database to “adult.” Like a crack through ice, the change tore through Amazon’s product catalogs, de-listing each and every book that had been tagged with similar metadata. It so happened that the database in question was one containing a preponderance of LGBT authors. Immediately, thousands of users and newspapers were reporting the disappearance of gay, lesbian, and bi titles from the website. Outraged, Amazon users, authors, and activists took to Twitter, and Jeff Bezos was soon trending for all the wrong reasons. This was not a case of homophobia, Amazon was quick to reply, but rather a case of human error that had reverberated throughout the algorithmic architecture of the company. Amazon didn’t stereotype LGBT content; the algorithm did.

More than a decade later, little has changed. Algorithmic bias is stronger than ever: not only are the databases still organized around the same immutable characteristics (skin color, sex, sexual orientation, and so on), but they are also growing faster and vaster by the second. And Amazon is not alone. YouTube, Facebook, Instagram, Twitter, and Twitch — the algorithms govern them all. Google any of these platforms (or even Google itself) with the search query “+ algorithm + homophobia,” and you will see that cases of homophobic discrimination are as abundant as they are bewildering. The same is true for sexism, racism, political affiliation, religious denomination — every type that can be stereotyped. This isn’t entirely the fault of the platforms themselves. Thanks to the scale, scope, and speed of the digital age, the processing power of even the most brilliant and least bigoted brains is struggling to keep up.

Every minute, 500 hours of video are uploaded to YouTube; every day, Instagram processes 95 million photos; and by 2021, Facebook had grown to almost three billion users. To cope with this influx, moderation has taken an algorithmic turn toward machine learning. This is unsurprising, for the cost of hiring enough human moderators to process the data would quickly snowball into billions of dollars. Lacking the intuition, nuance, and common sense of their human predecessors, our new A.I. hall monitors are remaking the landscape of acceptability.
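
The “billions of dollars” figure is easy to sanity-check with rough arithmetic. Only the 500-hours-per-minute upload rate comes from the figures above; the review speed, shift length, and hourly wage in the sketch below are hypothetical assumptions, so treat the result as an order-of-magnitude estimate rather than a real costing.

```python
# Hypothetical cost estimate for all-human moderation of YouTube uploads.
# Only the 500-hours-per-minute figure comes from the text above; the review
# speed, shift length, and wage are illustrative assumptions.
HOURS_UPLOADED_PER_MINUTE = 500
hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24  # 720,000 hours of new video daily

review_speed = 1.0   # assume a moderator reviews video at real-time speed
shift_hours = 8      # assume an 8-hour shift
hourly_wage = 15.0   # assumed fully loaded cost per hour, in USD

moderators_needed = hours_per_day / (review_speed * shift_hours)
annual_cost = moderators_needed * shift_hours * hourly_wage * 365

print(f"Moderators needed: {moderators_needed:,.0f}")
print(f"Annual cost: ${annual_cost / 1e9:,.1f} billion")
```

Even under these conservative assumptions, YouTube alone would need roughly 90,000 full-time moderators, at an annual cost of nearly $4 billion, before counting Facebook, Instagram, Twitter, or Twitch.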

Social media platforms are now nudging creators away from the cutting edge and toward the banal; toward, in the words of YouTube, “content that meets a higher level of brand safety.” “The sponsor, the merchant, has been living at the summit of our communications system,” wrote Erik Barnouw in 1978, lamenting how market values had set up “a buffer zone of approved ‘culture.’” For mainstream creators, whose content suits the PC and PG nature of the algorithmic buffer zone, platforms have brought newfound potentiality. For creators on the periphery, however, particularly those of the genre-bending and norm-breaking variety, whose content often cuts against the grain of popular (or programmable) opinion, platforms are heralding a new dawn of precarity. Industrial scale is antithetical to arthouse sensibility. Increasingly, however, industrial scale is the only game in town.


It’s not just gay, lesbian, bisexual, and trans content that is undermined by algorithmic biases — advocacy for decriminalizing prostitution, protecting sex workers, or even recognizing polyamorous relationships gets caught in the dragnet. Anything that can be lumped in with “sex” is suspect. Indeed, anyone who is queer in the most fundamental sense — that is to say, anyone who deviates from the norm — must now weigh the value of their free expression against the risk of being throttled by the invisible hand of the attention economy. Information flows are caught in a digital pincer movement, with moderation on one flank and promotion swooping in on the other. Algorithms boost not only what falls into their ever-narrowing whitelist, but also what drives the lucrative traffic and clicks upon which tech giants thrive. This means that nuanced political analysis flies under the radar while the algorithm puts its electric finger on the scale in favor of hyper-partisan hacks. The implications of this reach all the way to democracy itself. At the beginning of the 19th century, Percy Shelley wrote that poets were the unacknowledged legislators of the world. Two centuries later, platforms have grabbed the wheel.

The likes of Facebook and YouTube are not only driving dangerously — they’re also driving blind. As Amazon demonstrated many years ago, algorithms alone cannot be trusted to keep an eye on culture. We humans, too, must keep an eye on them. Those billions of dollars that would be needed to rehumanize content moderation? They would be an investment not only in the interest of society, but in the interest of the platforms themselves. For once social media becomes little more than a war between bots and algorithms, humans will look elsewhere. As computer engineer Bill Joy wrote, problems arising in the management of technology cannot be solved with more technology.

To fight the growing crisis of content moderation, the likes of Facebook, YouTube, and Twitter are making an attempt to turn back the tide. Homo sapiens are tagging in. Mark Zuckerberg has made a point of hiring thousands of new moderators over the past couple of years; Susan Wojcicki has spearheaded the reassignment of existing employees to the task; and, before his departure, Jack Dorsey rolled back features that were putting a strain on human oversight. Despite the headlines and PR campaigns, however, all this is too little to address the problem. In a three-month period late in 2019, YouTube deleted five million videos. Every hour, it terminates nearly 2,000 channels. “Today, more than 90% of the content that we remove from YouTube is first detected by machine learning systems,” explains Jennifer Flannery O’Connor, Director of Trust and Safety. Between January and June of 2020, Twitter removed potentially offensive content from 1.9 million accounts and issued 925,700 suspensions for rule violations. If, as Zuckerberg estimates, Facebook gets 10% of its moderation decisions wrong, how do the failure rates of YouTube and Twitter compare? Whatever the true percentage of false positives may be, the cumulative effect bears greatly on smaller creators — especially those who are not famous enough to kick up a fuss, or wealthy enough to shrug it off.
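
Back-of-the-envelope arithmetic makes the scale of those errors concrete. In the sketch below, the removal volumes are the figures cited above, while the error rates, apart from Zuckerberg’s 10% estimate for Facebook, are hypothetical and included only for illustration.

```python
# Hypothetical arithmetic: even modest error rates, applied to removals at
# this scale, imply huge absolute numbers of wrongful moderation actions.
# Volumes are the figures cited above; the error rates are illustrative
# assumptions, not platform disclosures.
volumes = {
    "YouTube videos removed (three months, late 2019)": 5_000_000,
    "Twitter accounts actioned (Jan-Jun 2020)": 1_900_000,
}

for label, volume in volumes.items():
    for error_rate in (0.01, 0.05, 0.10):  # 10% is Zuckerberg's estimate for Facebook
        wrongful = volume * error_rate
        print(f"{label}: ~{wrongful:,.0f} wrongful actions at a {error_rate:.0%} error rate")
```

Even at a 1% error rate, an order of magnitude better than Facebook’s self-assessment, YouTube alone would wrongly remove roughly 50,000 videos per quarter.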

The toll on smaller creators can be seen most clearly in the algorithmic biases of YouTube’s SEO-style (word-based) content moderation, whose overfitting (taking words out of context) has disproportionately targeted the very “sensitive subjects” the platform claims to support. In one telling case, the algorithm flagged one of YouTube’s own “Spotlight” videos celebrating Pride Month, in which an LGBT couple read their wedding vows, as “inappropriate content.”
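
To see why word-based moderation overfits in this way, consider a deliberately naive sketch. Nothing here reflects YouTube’s actual system, which is proprietary; the term list, threshold, and flag_for_review function are invented for illustration. The structural point stands regardless: a filter that merely counts “sensitive” keywords, with no model of context, treats wedding vows and genuine policy violations alike.

```python
# A minimal, hypothetical sketch of context-blind keyword flagging.
# The term list and threshold are invented for illustration; they are not
# YouTube's actual rules.
SENSITIVE_TERMS = {"lgbt", "gay", "lesbian", "sex"}

def flag_for_review(transcript: str, threshold: int = 2) -> bool:
    """Flag a video when its transcript mentions 'sensitive' terms too often,
    regardless of the context in which they appear."""
    words = (w.strip(".,!?\"'").lower() for w in transcript.split())
    hits = sum(1 for w in words if w in SENSITIVE_TERMS)
    return hits >= threshold

# Benign content trips the filter just as easily as genuine violations:
vows = "As a gay couple, we are proud to share our LGBT wedding vows with the world."
print(flag_for_review(vows))  # True: flagged despite the celebratory context
```

Real classifiers are far more sophisticated, but the failure mode persists whenever the signal is the words themselves rather than what is being done with them: the communities that talk most about a “sensitive subject” become the communities most likely to be flagged for it.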

“We had this ambient awareness of our dependence on these big tech platforms… But there’s nothing like having your livelihood snatched away from you to make you feel really disempowered,” wrote Ash Sarkar, a contributing editor at Novara Media, the independent news outlet whose YouTube channel was deactivated. YouTube notified them by email that they were guilty of “repeated violations” of the community guidelines, but without specifying what those violations were. Novara’s 175,000 subscribers and Sarkar’s 350,000 Twitter followers whipped up enough of an outcry that YouTube quickly restored the account, admitting that the media outlet had somehow been mistakenly flagged as spam.

In the case of right-wing political YouTuber Steven Crowder, the platform’s moderation team demonetized his channel not by mistake but deliberately, for repeated homophobic remarks directed at a gay reporter with whom he was feuding. Crowder himself was largely unaffected. His audience is large enough that he makes most of his money selling merchandise, subscriptions, and videos on his own website. “This really isn’t that big of a ding for us,” he said. Time and again, we see the same pattern: large creators can weather the storm, while smaller ones are swept away. Demonetization is not only creating “a steeper and longer on-ramp for content creators to join the YouTube Partners’ Program,” writes professor Sangeet Kumar; it is making it difficult for outside-the-box thinkers and creators to exist on YouTube at all. It has become a form of deterrence that dissuades risk-taking more than rule-breaking.

If content moderation across all platforms continues along its current path, we run the risk — personally, socially, and culturally — of fostering a system of implicit disincentives and unwritten rules: a creative straitjacket that may make the mainstream too safe and too profitable to resist. It is not only true creatives who will suffer, but creativity itself.

What can be done? Well, if Twitter is to remain a place worth reading, and YouTube a place worth watching, the likes of GPT-3 and DALL-E cannot be given free rein. More is different, as Philip W. Anderson famously wrote, but not necessarily better. If quantity is not to destroy quality, social media must not only keep humans in the loop, but add thousands more. We can trust neither YouTube’s algorithms nor OpenAI’s ethics to safeguard the Overton window for us, nor should we expect them to. On paper, the algorithms are designed to filter out harmful content such as misinformation and extremism (as each company defines them); in practice, they sweep up many false positives and funnel creators into a constricted handful of niches where audience capture can easily occur, reinforcing groupthink. The end result, ironically, is a landscape that allows extreme partisanship to survive and flourish (as nuanced alternatives are drowned out). Conformity doesn’t necessarily mean milquetoast centrism. The narrowing of the Overton window is the narrowing of discourse, not of thought. Technology can censor, but it cannot rebut.

Becky Lucas. Jayden Croft. David Hoffman. Juliana Sabo. The greatest casualties of content moderation are the names you have never heard of, and never will: the working class, the avant-garde, the up-and-coming, the edgy, the heterodox, and the queer. The thousands (if not millions) of independent and minority creators, for whom moderation errors are not only soul-destroying, but career-ending. The freedom to be different is one we must buttress with our words, our attention, and — yes, Big Tech — our wallets. Pay it now, Zuckerberg & Co. It will cost you more later; everything, perhaps.

Published Sep 11, 2022
Updated Sep 12, 2022