Regulating Tech Companies
Shortly before a gunman opened fire in an El Paso Walmart in 2019, killing 22 people, a four-page anti-immigrant screed appeared on the online message board 8chan. It was the third time that year that such a message had appeared on the site – notorious for its hateful and violent user-generated content – just before a mass shooting.
8chan’s connection to the El Paso shooting raised a now-constant question: how much responsibility should tech companies bear for allowing objectionable content to reach an audience on the internet?
While 8chan became a major flashpoint in the debate over whether and how tech companies should police content on their sites, it was hardly the first. The U.N. has blamed Facebook for its role in inciting violence against the Rohingya minority in Myanmar. Russia used networks of fake Twitter accounts and misleading Facebook ads in a campaign to influence the 2016 U.S. presidential election. And several days after the El Paso shooting, Cloudflare, a company that provided web infrastructure services to 8chan, announced that it would terminate 8chan as a customer.
Many experts, members of Congress, and ordinary citizens have argued that we need new rules for tech companies, particularly social media platforms. But managing massive amounts of online content is not straightforward, and different companies are taking different approaches, raising thorny questions along the way.
A Platform, Not a Publisher
Tech companies did not invent the problem of controversial content. In 1800, a journalist distributed a pamphlet criticizing President John Adams and was convicted for doing so under the Sedition Act – one of the Alien and Sedition Acts, which Adams had signed into law. But social media companies argue that they occupy a fundamentally different place in the media landscape than a pamphlet distributor: they are not publishers of information, merely platforms on which individuals can post content. Further, U.S. law shields these companies under Section 230 of the Communications Decency Act, which generally protects online platforms from being sued over content posted by their users.
The major social media companies do have rules about what kinds of content are unacceptable on their platforms: harassment, crime, and pornography, for example. But deciding how to handle problematic content is an ongoing battle. Here’s what three of the major platforms are doing, and how they have handled recent controversies.
Facebook
What it’s doing: Facebook has taken several steps to improve its platform, such as working with independent fact-checkers, employing machine learning to fight fraud, and making it easier for users to flag misinformation. The company also launched two initiatives to help people make better decisions about what they read and share online. Facebook additionally announced that it would establish an independent board to decide what kinds of content to allow.
Tricky decisions: In May, a video of House Speaker Nancy Pelosi, doctored to make her appear to slur her words, began circulating online. Thirty-two hours later, Facebook attached a warning to the video saying that independent fact-checkers had determined it to be inauthentic, and reduced how often the video appeared in users’ news feeds. What Facebook did not do was remove the video, explaining, “We think it’s important for people to make their own informed choice about what to believe.”
YouTube
What it’s doing: Earlier this year, “YouTube started to limit recommendations for content that it considers to be harmful misinformation, such as videos that promote the flat Earth theory, promise a miracle cure for a disease, or spread 9/11 conspiracy theories, in the U.S.,” according to Vox. The company also (eventually) disabled hundreds of channels dedicated to child exploitation videos.
Tricky decisions: Earlier this summer, YouTube followed Facebook in announcing that it would remove videos glorifying white supremacy, and began taking such content down. In the process, however, the company removed a number of educational videos, including one about Nazi Germany posted by a teacher. The incident demonstrated how difficult it is for an algorithm to determine what constitutes hate speech without the context a human moderator would bring to the job.
The doctored Nancy Pelosi video also found its way onto YouTube, which, unlike Facebook, took it down.
Twitter
What it’s doing: Twitter has often been slower to respond to problematic content on its platform. Last year, it banned Alex Jones, a prominent conspiracy theorist and denier of the Sandy Hook shooting – but only after other platforms had kicked him off first. It also banned Laura Loomer, a conservative activist who the company said violated its terms of service by posting hateful content against Congressional Representative Ilhan Omar. More recently, it joined Facebook in removing thousands of accounts linked to the Chinese government.
Tricky decisions: While 8chan is still offline, it maintains its verified account on Twitter, signifying that Twitter is currently taking no action against it (Twitter assigns “verified” status to accounts of public interest that it confirms are authentic). Last month, a coalition of civil and digital rights activists called on Twitter to ban white supremacists from its platform, something Facebook and YouTube have already done.
What’s Next?
While problematic online content is inflicting real harm on our social and political discourse, there is little agreement on how to address the issue. One tech writer notes that forcing tech companies to take greater responsibility for the content on their platforms “could also have the opposite effect than what many critics want: better policing their own content could actually increase the power that tech platforms have to shape our lives.” As Congress, tech companies, civil libertarians, and advocacy groups hash out a framework that makes sense in the age of online news, our best defense for now may be better media literacy.
Learn More
Weaponized Information: A widely shared 2018 op-ed by tech reporter Kara Swisher says that social media companies have become modern arms dealers – via The New York Times
Facebook’s Fight in Europe: The social network waged a years-long lobbying war against regulation in the European Union, including measures that would make companies liable for content on their platforms – via Politico
The Market Has Spoken: A conservative writer argues that tech companies are already engaged in censorship and that users don’t care – via National Review
Not Just Social Platforms: A recent investigation found hundreds of unsafe or banned products on Amazon, raising questions about how well the company vets the products sold on its platform – via The Wall Street Journal