Farhad Manjoo

Facebook Is Bad. Fixing It Rashly Could Make It Much Worse.

The nicest thing you can say about the Health Misinformation Act, proposed in July by the Democratic senators Amy Klobuchar and Ben Ray Luján, is that it means well. The internet has been a key accelerant of widespread myths, misunderstandings and lies related to Covid-19; Klobuchar and Luján’s bill would push online companies like Facebook to crack down on false information during public health emergencies by stripping them of immunity from lawsuits if they don’t.

There’s only one problem: What is health misinformation? I know of no oracular source of truth about Covid-19. Scientific consensus has shifted dramatically during the pandemic, and even now experts are divided over important issues, such as whether everyone should get a vaccine booster shot. Klobuchar and Luján’s bill elides these complications. Instead they designate an all-knowing authority: Health misinformation, the bill says, is whatever the secretary of health and human services decides is health misinformation.

I’m sorry — what? Have the senators forgotten that just last year we had a president who ridiculed face masks and peddled ultraviolet light as a miracle cure for the virus? Why would we choose to empower such a president’s cabinet appointee as the arbiter of what’s true and false during a pandemic? And not just a pandemic — since the bill defines a public health emergency so broadly, I wouldn’t put it past a science-averse future secretary to declare discussions about abortion, birth control, transgender health or whatever else “misinformation.”

Klobuchar and Luján’s bill is one of many plans that attempt to curb the power of tech companies by altering Section 230 of the Communications Decency Act, the much-hated and much-misunderstood 1996 rule that affords websites broad immunity from liability for damage caused by their users. Proposals from Democratic lawmakers tend to call on tech companies to delete or demote false content in order to retain Section 230 immunity; proposals from Republicans generally do the opposite, threatening to undo immunity if tech companies censor content “unfairly” or “in bad faith.”

The plans from both sides fill me with deep dread. Legal experts argue that many Section 230 proposals, including the Klobuchar-Luján bill, likely violate the First Amendment, which makes it extremely difficult for Congress to dictate to private companies and their users what people can and can’t say online. At best, then, the proposals to reform Section 230 might amount to little more than performative gestures, a way for lawmakers to show they’re doing something, anything, about the runaway power of tech giants. At worst, though, these plans may backfire catastrophically. Rather than curbing the influence of Big Tech, altering Section 230 might only further cement Facebook and other tech giants’ hold over public discourse — because the giants might be the only companies with the resources to operate under rules that invite a flood of lawsuits over what users post. Smaller sites with fewer resources, meanwhile, would effectively be encouraged to police users’ content with a heavy hand. It is no accident that Facebook has been telling lawmakers that it welcomes reforms to Section 230 — while smaller sites like Etsy and Tripadvisor are nervous about the possibility.

Recent reports have deepened lawmakers’ impatience with Facebook. This week news organizations are running scores of stories based on documents leaked by Frances Haugen, the former Facebook employee turned whistle-blower. Haugen’s documents show a company out of control, one whose sense of ethics rarely rises above the bottom line, one ripe for regulation and reform.

“There’s so much hatred for Facebook right now that anything is possible,” said Jeff Kosseff, a professor of cybersecurity law at the United States Naval Academy and the author of a book about Section 230, “The Twenty-Six Words That Created the Internet.” Kosseff is most worried about a last-minute, dead-of-night change that undoes the governing law of the internet. “The worst possibility is that every proposal gets into one 500-page omnibus bill that gets passed right before people go home in December, and makes Section 230 entirely inoperable,” he told me.

Section 230 has been a punching bag for Democrats and Republicans for years. Last year Donald Trump, who argued that the law allowed liberal tech executives to censor right-wing ideas, issued an executive order aimed at limiting its scope. President Biden revoked that order in May, but he has also called for Section 230’s repeal. Both Trump and Biden are emblematic of a widespread misunderstanding about Section 230 — the idea that it is the rule that gives tech companies wide leeway to moderate online discussions.

In fact, it is the First Amendment that grants technology companies that right. As Daphne Keller, the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, has outlined, there are at least six different ways that the Constitution limits Congress’s power to regulate online discourse.

Among these limits: Congress can’t require companies to ban constitutionally protected speech — and objectionable as it might be, in America, health misinformation is legal speech, and it is not a crime for me to tell you where to stick your syringe.

In a recent academic article, Keller makes a convincing case that the Supreme Court’s First Amendment precedents also prevent Congress from telling tech companies not to amplify certain speech through recommendation algorithms like the one behind Facebook’s News Feed. Such a law would constitute a burden on speech, and the court has ruled that burdens on speech get the same scrutiny as bans on speech. Congress might even run afoul of the First Amendment merely by incentivizing companies to maintain certain speech standards, Keller has argued.

Not everyone agrees that the Constitution is incompatible with speech regulations for tech companies. Lawrence Lessig, a professor at Harvard Law School who has been working with Haugen, the Facebook whistle-blower, told me that some content-neutral rules for online speech might survive constitutional scrutiny — for example, a rule that set an upper limit on the number of times a Facebook post could be reshared.

More broadly, Lessig argued that legal scholars of the digital world should begin to think more creatively about ways to tame social media. “We kind of stopped our thinking too early in the evolution of these technologies, and there’s a lot more thinking to be done,” he said.

Indeed, Kosseff, Lessig and Keller all agreed on one idea — that before hastily enacting new online speech laws, Congress ought to appoint a kind of blue-ribbon investigative commission with the power to compel tech giants to provide much more information about how their platforms work. Lawmakers would be much better equipped to decide what to do about online discourse if they understood how it operates now, they argued.

But of course, a commission is nobody’s idea of compelling politics. “It’s kind of unsatisfying,” Keller told me. I agree — but it’s better than moving haphazardly and making our problems much worse.

The New York Times