Parmy Olson

Facebook’s Greater Threat Is the Law, Not Lawsuits

Meta Platforms Inc. has become a lightning rod for legal challenges in the US, from the FTC’s antitrust case to shareholder lawsuits alleging the company misled investors. Last week, eight complaints were filed against the company across the US, including allegations that young people who frequently visited Instagram and Facebook went on to die by suicide or develop eating disorders. (Facebook has not commented on the litigation, and has denied allegations in the FTC and shareholder complaints.)

The allegations echo the concerns of Facebook whistleblower Frances Haugen, whose leak last year of thousands of internal documents showed that Meta was aware of the psychological harms its algorithms caused users — for instance, that Instagram made body-image issues worse for one in three teen girls.

While the lawsuits strike at the heart of Meta’s noxious social impact and could help educate the public on the details, they likely won’t force significant change at Facebook. That’s because Section 230 of the Communications Decency Act of 1996 shields Facebook and other internet companies from liability for much of what their users post. Unless US law changes — and there are no signs this is happening soon — Meta’s lawyers can continue to use that defense.

But that won’t be the case in Europe. Two new laws coming down the pike promise to change how Meta’s algorithms show content to its 3 billion users. The UK’s Online Safety Bill, which could come into force next year, and the European Union’s Digital Services Act, likely coming into force in 2024, are both aimed at preventing psychological harms from social platforms. They’ll force large internet companies to share information about their algorithms with regulators, who will assess how “risky” they are.

Mark Scott, chief technology correspondent with Politico and a close follower of those laws, joined me on Twitter Spaces last Wednesday to answer questions about how the laws would work and what their limitations are. Our discussion is edited below.

Parmy Olson: What are the main differences between the upcoming UK and EU laws on online content?

Mark Scott: The EU law is tackling legal but nasty content, like trolling, disinformation and misinformation, and trying to balance that with freedom of speech. Instead of banning [that content] outright, the EU will ask platforms to keep tabs on it, conduct internal risk assessments and provide better data access for outside researchers.

The UK law will be maybe 80% similar, with the same ban on harmful content and requirement for risk assessments, but it will go one step further: Facebook, Twitter and others will also be legally required to have a “duty of care” to their users, meaning they will have to take action against harmful but legal material.

Parmy: So to be clear, the EU law won’t require technology companies to take action against the harmful content itself?

Mark: Exactly. What they’re requiring is that platforms flag it. They won’t require them to ban it outright.

Parmy: Would you say the UK approach is more aggressive?

Mark: It’s more aggressive in terms of actions required by companies. [The UK] has also floated potential criminal sentences for tech executives who don’t follow these rules.

Parmy: What will risk assessments mean in practice? Will engineers from Facebook have regular meetings to share their code with representatives from [UK communications regulator] Ofcom or EU officials?

Mark: They will have to show their homework to the regulators and to the wider world. So journalists or civil society groups can also look and say, “OK, a powerful, left-leaning politician in a European country is gaining mass traction. Why is that? What is the risk assessment the company has done to ensure [the politician’s] content doesn’t get blown out of proportion in a way that might harm democracy?” It’s that type of boring but important work that this is going to be focused on.

Parmy: Who will do the auditing?

Mark: The risk assessments will be done both internally and with independent auditors, like the PricewaterhouseCoopers and Accentures of this world, or more niche, independent auditors who can say, “Facebook, this is your risk assessment, and we approve.” And then that will be overseen by the regulators. The UK regulator Ofcom is hiring around 400 or 500 more people to do this heavy lifting.

Parmy: What will social-media companies actually do differently, though? Because they already put out regular “transparency reports” and they have made efforts to clean up their platforms — YouTube has demonetized problematic influencers and the QAnon conspiracy theory isn’t showing up in Facebook’s News Feed anymore.

Will the risk assessments lead tech companies to take down more problem content as it comes up? Will they get faster at it? Or will they make sweeping changes to their recommendation engines?

Mark: You’re right, the companies have taken significant steps to remove the worst of the worst. But the problem is that we have to take the company’s word for it. When Frances Haugen made internal Facebook documents public, she showed things that we never knew about the system before, such as the algorithmic amplification of harmful material in certain countries. So both the UK and the EU want to codify some of the existing practices from these companies, but also make them more public. To say to YouTube, “You’re doing X, Y, and Z to stop this material from spreading. Show me, don’t tell me.”

Parmy: So essentially what these laws will do is create more Frances Haugens, except instead of creating more whistleblowers you have auditors coming in and getting the same kind of information. Would Facebook, YouTube and Twitter make the resultant changes globally, like they did with Europe’s GDPR privacy rules, or just for European users?

Mark: I think the companies will likely say they are making this global.

Parmy: You talked about tech platforms showing their homework with these risk assessments. Do you think they’ll honestly share what kind of risks their algorithms could cause?

Mark: That’s a very valid point. It’ll come down to the power and expertise of the regulators to enforce this. It’s also going to be a lot of trial and error. It took about four years to iron out the bumps before Europe’s GDPR privacy rules really took effect. I think as the regulators get a better understanding of how these companies work internally, they’ll get better at knowing where to look. I think initially, it won’t be very good.

Parmy: Which law will do a better job of enforcement?

Mark: The UK bill is going to get watered down between now and next year, when it will hopefully come into play. This means the UK regulator will have these quasi-defined powers, and then the rug will be pulled out from underneath them for political reasons. The Brits have been very wishy-washy in terms of how they’re going to define “legal but harmful” [content that must be taken down]. The Brits have also made exceptions for politicians, but as we’ve seen most recently in the United States, some politicians are the ones purveying some of the worst mistruths to the public. So there are some big holes that need to be filled.

Parmy: What do these laws get right, and what do they get wrong?

Mark: The idea of focusing on risk assessments is, I think, the best way to go. Where they’ve gone wrong is the over-optimistic sense that they can actually fix the problem. Disinformation and politically divisive material were around way before social media. The idea that you can create some sort of bespoke social-media law to fix that problem without fixing the underlying cultural and societal issues that go back decades, if not centuries, is a bit myopic. I think [British and EU] politicians have been very quick and eager to say, “Look at us, we’re fixing it.” Whereas I don’t think they’ve been clear on what they’re fixing and what result they’re looking for.

Parmy: Is framing these laws as being about risk assessments a clever way to protect free speech, or disingenuous?

Mark: I don’t have a clear answer for you. But I think the way of targeting risk assessments, and mitigating those risks as much as possible, that’s the way to go. We’re not gonna get rid of this, but we can at least be honest and say, “This is where we see problems and this is how we’re gonna fix them.” The specificity is missing, which provides a lot of gray space where legal fights can continue, but I also think that’s going to come in the next five years as the legal cases get fought, and we’ll get a better sense of exactly how these rules will work.

Bloomberg