Care About Free Speech? Take on the Power of Facebook and Twitter.

It is incredibly important to protect free speech and, by extension, the internet as a space to cultivate and share ideas and viewpoints that may fall outside the mainstream. That means curbing the power of billionaires like Mark Zuckerberg.

Facebook founder and CEO Mark Zuckerberg arrives for the 2020 Munich Security Conference (MSC) on February 15, 2020, in Munich, Germany. Johannes Simon / Getty

Facebook has vowed to close the “trust deficit” after more than a hundred companies announced they were boycotting the social media company’s advertising platform. Mark Zuckerberg’s tepid apologies tend to arrive at regular intervals; his most recent comes in response to complaints that his platform doesn’t do enough to police hate speech.

As city streets across America became battlegrounds between protesters and cops in the wake of George Floyd’s murder on May 25, President Donald Trump took to social media to advocate for deadly force to be visited upon looters. The president’s message was flagged on Twitter for promoting violence but was allowed to stand untouched on Facebook, angering both Facebook employees and civil rights groups.

Zuckerberg claimed that, personally, he had a “visceral negative reaction” to Trump’s tweet, but, as “the leader of an institution committed to free expression,” he had no choice but to let it stand.

Observers were not so sure about his objectivity, pointing to a secret dinner held at the White House last fall that included Mark Zuckerberg and Priscilla Chan, Jared Kushner, Ivanka Trump, Peter Thiel, the president, and the first lady.

In response, the Anti-Defamation League, Color of Change, the NAACP, and other civil rights groups formed Stop Hate for Profit, an initiative calling on American corporations to “hit pause on hate” by withholding ad dollars from Facebook for the month of July.

Major US corporations — Unilever, Adidas, Coca-Cola, North Face, REI, Verizon — signed on. “There is no place for racism in the world and there is no place for racism on social media,” declared Coca-Cola chief executive James Quincey. When Verizon and Unilever joined the boycott, Facebook saw its shares fall 8.3 percent.

Forced to do something, Zuckerberg livestreamed an announcement on June 26 that, in the future, the company would flag content that was “newsworthy” but violated its community standards. It also promised to have the Media Rating Council audit its community standards and to keep working with the Global Alliance for Responsible Media to improve its “digital ecosystem.”

The episode followed a now-familiar arc: Facebook receives criticism for the way its platform operates, promises to do better, makes some inconsequential tweaks, and then media attention moves on to something else.

In this particular interlude, corporations, which have long butted heads with Facebook and were happy to spend a bit less money this summer anyway, got a chance to appear woke. Many are likely still quietly advertising through Facebook internationally, on Instagram, and on third-party apps using the Facebook Audience Network.

Yet the predictability of the episode belies a broader shift. Slowly but surely, public opinion is moving toward the idea that online digital platforms and social media content should be more strictly regulated.

Stop Hate for Profit, for example, has made recommendations beyond its immediate call for a boycott. It wants Facebook to hire a “C-suite level” civil rights expert, “to submit to regular, third party, independent audits of identity-based hate and misinformation with summary results published on a publicly accessible website,” and to give advertisers their money back if their ads are shown next to objectionable content.

The organization has made more far-reaching demands as well. It wants Facebook to remove public and private groups dedicated to white supremacy, antisemitism, violent conspiracies, vaccine misinformation, and climate denialism; to eliminate the fact-checking exemption for political ads; to hire teams of people to “review submissions of identity-based hate and harassment”; and to hire real people to respond to individuals experiencing harassment on the site.

While Stop Hate for Profit’s campaign is pushing hard to de-platform offenders, other voices are calling for the elimination or overhaul of Section 230 of the 1996 Communications Decency Act.

Section 230 was originally designed to protect internet companies by specifying that they would not be treated as the publishers of content posted by their users, thereby shielding them from liability for the content that appeared on their sites while still giving them the right to police that same content as long as they were acting in “good faith” to uphold existing laws.

Aside from a few notable court cases, Section 230 went largely unchallenged for two decades. Exceptions to the statute were made for copyright infringement, child pornography, and, more recently, sex trafficking, but for the most part, digital platforms like Facebook have been left to their own devices in moderating content.

No longer. The 2016 presidential election put social media companies in the spotlight, prompting calls to scrap the immunity Facebook, Twitter, and others enjoy. In the last few years, elected officials have argued for eliminating Section 230 protections for big companies, for companies that use algorithms to sort user content, for companies that are not politically neutral, for companies that use end-to-end encryption, and more.

In 2020, both Trump and Joe Biden have called for Section 230 to be weakened or revoked. In March, a group of senators introduced the EARN IT Act, a bill that would have removed Section 230 protections for any company that didn’t follow the “best practices” approved by Attorney General William Barr. After strong public pushback, an amended version of the bill recently advanced out of committee; it weakens online platforms’ Section 230 protections, makes them subject to lawsuits from individual states, and, critics argue, opens the door to a ban on end-to-end encryption.

In June, Republican senators introduced a bill that addresses the issue of content enforcement. “For too long, Big Tech companies like Twitter, Google and Facebook have used their power to silence political speech from conservatives without any recourse for users,” argued Josh Hawley, who introduced the bill with co-sponsors Marco Rubio and Tom Cotton. If the bill passes, individual users who think they have been unjustly censored will be able to sue social media companies for up to $5,000.

A milder, bipartisan Senate bill introduced last month — the Platform Accountability and Consumer Technology Act — would demand greater transparency and responsiveness from internet platforms and would exempt “the enforcement of federal civil laws from Section 230,” enabling the Department of Justice (DOJ) and the Federal Trade Commission to pursue civil actions against online platforms.

The DOJ also came out with its own rather murky recommendations last month. It argued that the original intent of Section 230 has been lost, leaving tech companies with too much power and little incentive to police illicit activity: “The time has . . . come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services.”

This “realignment” includes exceptions for “bad Samaritans,” child abuse, terrorism, cyber-stalking, and any case where platforms knowingly violated federal criminal laws; increased “civil enforcement capacities”; more clarity on antitrust claims; and a proposal to rewrite the original statute to remove ambiguous language.

Long story short, a wide range of reforms are being demanded — and some are deeply problematic.

Some demands, such as calls for Facebook to reimburse advertisers for ads that appear in contexts that don’t support their brand, are business-friendly reforms designed with profits in mind. Along with calls for greater “transparency,” they do little to challenge tech companies’ underlying business model of invasive, persistent surveillance.

Other proposals that use the laudable goal of protecting children as a back door to eliminating end-to-end encryption and increasing government surveillance of digital content bear the stamp of law enforcement agencies’ long-standing desire for greater control over the digital sphere.

Still other demands, such as making Section 230 protection contingent on digital platforms’ ability to convince external auditors that “their algorithms and content-removal policies are politically neutral,” are a serious threat to free speech.

When unaccountable billionaire Mark Zuckerberg, who makes his money from selling access to our personal data, sets himself up as the protector of free speech, it’s easy to be cynical — to view free speech as nothing more than a smoke screen for self-interested actors to hide behind.

We should resist this impulse. It is incredibly important to protect free speech and, by extension, the internet as a space to cultivate and share ideas and viewpoints that may fall outside the mainstream.

This doesn’t mean that tech billionaires should get to police the (increasingly digital) landscape of public speech. But neither should right-wing politicians, law enforcement agencies, or business groups.

Preserving the internet as a place of free expression while simultaneously protecting the electoral process, shielding users from harassment and abuse, and reining in the power of hate groups is an incredibly difficult task. Indeed, one could argue that it is a defining challenge of the present moment.

Such an important challenge requires our full attention and participation.

The legislative and civil society solutions currently in play are deeply flawed. Now is the time for vigorous, democratic debate about the contours of free speech and the digital landscape we want to build.

The alternative is more of the same — or potentially something much worse.