Joe Nocera

Revoke Social Media’s Legal Shield, But for the Right Reason

Donald Trump has been undeniably good for Twitter Inc. When he announced that he was running for president in June 2015, he was already a significant presence on the social media site; by the end of that year he had more than 5 million followers. By last week, when Twitter finally banned him, he had almost 89 million. In the first three years of Trump’s presidency, meanwhile, Twitter’s revenue grew from $2.2 billion to $3.5 billion. Whether friend or foe, you had to join Twitter if you wanted to keep track of the U.S. president.

Were many of Trump’s 56,000 tweets — along with plenty of his Facebook posts — incendiary, full of lies, threats, conspiracies, and insults? Of course they were. Did he regularly resort to the kind of hate speech that Twitter and Facebook insist they bar? Yes. Did he sometimes seem to be inciting violence? Without question.

So please excuse me if I’m a little cynical about the decision by Twitter — and Facebook — to finally boot Trump 12 days before his presidency ends, when he no longer has the ability to use the federal government to strike back, as he has so often done to companies that anger him. If Trump had won reelection, would the social media companies have bounced him? If he had instigated the attack on the U.S. Capitol, say, a year ago, would he have been banned? Unlikely.

The two tweets, posted on Friday, that were ostensibly the final straw for Twitter were actually pretty benign. One said he wouldn’t attend the inauguration; the other said that the 75 million people who voted for him “will have a GIANT VOICE long into the future.” Twitter’s rationale for how these two tweets might incite violence was extremely weak. Coming two days after Trump’s supporters took over the Capitol, the ban had the feel of shutting the barn door long after the horse had left.

Many on Twitter, including a lot of journalists, applauded the ban, viewing it as a case of “better late than never”; after all, Trump had been violating the company’s terms of service for years. Trump supporters, starting with Donald Trump Jr., complained that Big Tech was trying to silence them and was violating their free-speech rights. Trump critics responded that the First Amendment applies only to government action, not that of private companies such as Twitter and Facebook.

Which, of course, is true. Twitter and Facebook have every legal right to allow — or disallow — whomever they want on their platforms. Similarly, Google and Apple can accept or reject apps as they see fit; indeed, over the weekend, the two companies cut off Parler, a right-wing social media platform that was suddenly flooded with incitements to violence, according to the two tech giants.

But consider: Do you really want Jack Dorsey, Mark Zuckerberg, Tim Cook, and Sundar Pichai deciding which speech is acceptable and which is not on their platforms — platforms that are now indistinguishable from the public square? In addition to the problem of having so much power concentrated in so few hands, they are simply not very good at it. Their rules are vague, change constantly, and are often ignored if the user is prominent enough.

It used to be acceptable to be a Holocaust denier on Facebook; then last year, Zuckerberg, saying his thinking “had evolved,” decided to ban Holocaust denial. During the 2016 election, Twitter was filled with anti-Semitic tweets, which the company rarely removed. According to an article by Andrew Marantz in the New Yorker, Facebook has some 15,000 “content moderators” who are responsible for finding, and taking down, posts that violate Facebook’s terms of service. Given that Facebook has upward of 2.7 billion monthly active users, this would be an impossible task even if the company were serious about the mission — which Marantz doubts.

He quotes approvingly a Facebook critic who believes that “not even the most ingenious technocratic fix to Facebook’s guidelines can address the core problem: its content-moderation priorities won’t change until its algorithms stop amplifying whatever content is most enthralling or emotionally manipulative.” That’s the content that makes the company the most money — content like Trump’s.

In the last Congress, House Democrats — and some Republicans — made it clear that they have an appetite for curbing the monopolistic practices of Facebook and the other big technology companies. I wholeheartedly favor new laws and tougher antitrust actions that would allow for more innovation and increased competition.

But even if Facebook is broken up, and Google becomes a regulated platform, it won’t curb the other power the tech companies have: the power to decide what is hate speech and what isn’t; what incites violence and what doesn’t; what speech should be allowed and what shouldn’t. As Alexey Navalny, Russia’s most prominent dissident, put it in a recent tweetstorm, “The ban [against Trump] on Twitter is a decision of people we don’t know in accordance with a procedure we don’t know.” This strikes me as unarguable — and a large part of the reason people don’t trust the decisions about speech that Twitter and Facebook make.

Navalny’s solution is to create a committee that would make such decisions with full transparency, including the ability to appeal any decision the committee makes. I doubt that would work — it certainly wouldn’t be able to operate quickly enough to remove hate speech in real time. And though this may betray my lack of imagination, I can’t conceive of how government could regulate the decisions of Facebook, Twitter et al.

Instead, I have come around to an idea that the right has been clamoring for — and which Trump tried unsuccessfully to get Congress to approve just weeks ago. Eliminate Section 230 of the Communications Decency Act of 1996. That is the provision that shields social media companies from legal liability for the content their users post — or, for that matter, for the content the companies block.

The right seems to believe that repealing Section 230 is some kind of deserved punishment for Twitter and Facebook for censoring conservative views. (This accusation doesn’t hold up under scrutiny, but let’s leave that aside.) In fact, once the social media companies had to assume legal liability — not just for libel, but for inciting violence and so on — they would quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn’t be able to hurl anti-Semitic slurs. Presidents and other officials could announce policies, but they wouldn’t be able to spin wild conspiracies.

Would this harm Facebook’s and Twitter’s business models? Sure it would. But so what? They have done the country a lot of harm, and it is clear they have no idea how to get their houses in order — and no real desire to, either. If they make less money but cause less damage to the country, it will be well worth it.

(Bloomberg)