Cass R. Sunstein
The New York Times

How Government Should Regulate Social Media Lies

“The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.”

— US Supreme Court Justice Oliver Wendell Holmes Jr., Schenck v. US, 1919

A lot of people are falsely shouting fire these days, and causing panics. Should they be punished? What about the platforms that host them?

For some shouts, the answer is clearly yes. In 2019, Facebook’s Mark Zuckerberg called for national regulation, specifically emphasizing harmful content and the integrity of elections. Whatever you think of his particular proposals, he pointed in promising directions.

In the last year, Twitter and Facebook have taken significant voluntary steps to combat misinformation, including warnings, reduced circulation and removal.

Should the government step in to oversee those steps? Should it require them? Should it forbid them? Should it demand more?

To answer these questions, we need to engage the First Amendment. The Supreme Court did that in 2012, offering something like a green light for falsehoods. In a key passage in the case of US v. Alvarez, the court invoked the totalitarian dystopia of George Orwell’s “1984” to declare, “Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.”

The case involved Xavier Alvarez, an inveterate liar who falsely claimed that he had been awarded the Congressional Medal of Honor. That claim violated the Stolen Valor Act, which made telling that particular lie a crime.

The court struck down the law, ruling that Alvarez’s lie was protected by the First Amendment. The court feared a “chilling effect” on speakers and speech. Justice Anthony Kennedy explained:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. … Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out.

Still, the court did not deny, and could not deny, that plenty of falsehoods can be regulated or forbidden. It is a crime to lie to the Federal Bureau of Investigation. It is a crime to impersonate a federal officer. If you sell a useless medicine and falsely say that it will prevent cancer, you can be punished. If you tell the local authorities that you saw your neighbor selling heroin when you saw no such thing, you have violated the law.

If journalists write what they know to be false, and if the statement or report actually injures somebody, the First Amendment does not stand in the way of civil sanctions, even when the victim is a public official. Under current law, there is a lot that can be done to discourage and to punish defamatory statements.

All this provides something like a road map for government regulation of misinformation on social media. Section 230 of the 1996 Communications Decency Act, which is now under considerable pressure, gives social media platforms broad immunity for what they allow. If it is repealed (and it should be, at least in part) and if public officials decide to regulate falsehoods, they can start by going after false advertising, fraud and libel.

Can they do more? Absolutely.

In the Alvarez case, the court left open the possibility that if a falsehood created serious harms, regulation might be permissible. While the justices did not want a Ministry of Truth, they did not mean to allow those false cries of fire in a crowded theater, either. The hard part is determining what should count as a real-life equivalent of that famous hypothetical lie.

Facebook’s community standards offer a clue. In their current form, they allow Facebook to take down “misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” Applying that standard, Facebook has done a great deal to restrict misinformation related to Covid-19 — and also to inform people who saw false statements about Covid-19 before they were removed that what they saw was false.

It’s a nice question what kinds of misinformation contribute to the risk of imminent violence or physical harm. We could easily imagine a serious argument that a lot of falsehoods about health or safety fall into that category. And some political claims — making false and hateful charges about people and events — might also be subject to regulation for that reason.

Whether regulation comes from government or social media platforms, it can take many forms, and it might not involve censorship at all. The best intervention might consist only of warnings such as “This statement is FALSE.” Social media platforms can and sometimes do allow falsehoods while making them harder to see. Governments might consider building on existing approaches, perhaps by enacting into law the best social media practices, and thus requiring their widespread adoption.

In 1927, Justice Louis Brandeis wrote that the right remedy for harmful speech is “more speech, not enforced silence.” Usually that’s true. But not always. When false shouts of fire are causing illness and death, or seriously undermining democracy itself, it’s not enough to hide behind the First Amendment.

We need to ask: What are we going to do about it?

Bloomberg