Frances Haugen

Europe Is Making Social Media Better Without Curtailing Free Speech. The US Should, Too.

Elon Musk’s deal to take Twitter private, which has spurred questions about power, censorship and safety for the future of the platform, happened just days after the European Union reached a landmark agreement to make social media less toxic for users. The new E.U. standards, and the ethic of transparency on which they are based, will for the first time pull back the curtain on the algorithms that choose what we see and when we see it in our feeds.

In Europe’s case, the dryly named Digital Services Act is the most significant piece of social media legislation in history. It goes to the heart of what I’ve tried to do as a whistle-blower who worked inside Facebook: make social media far better without impinging on free speech. Today, Facebook’s poorly implemented content moderation strategies leave those most at risk of real-world violence unprotected and consistently succeed at only one thing: angering everyone.

Last October, I came forward with a simple message: Facebook knew it was cutting corners to make more money, and the public was paying the price. In over 20,000 pages of documents that I disclosed to the Securities and Exchange Commission and to Congress, the public learned what Facebook already knew — its products were spurring hate and division, leading teenagers into rabbit holes of self-harm and anorexia, leaving millions of users without basic safety systems against hate speech or incitement to violence and, at times, were even used to sell human beings on the platform.

Global companies have chosen profit-maximizing strategies at the expense of the public interest before. We’ve seen it with pollution in the chemical industry, environmental damage in natural resource extraction and predatory mortgages in financial services.

What distinguishes the bad practices of these other industries from Big Tech is simple — there are laws holding them accountable. That’s what government is intended to do in democratic capitalism: use the law to steer the market back into alignment with the public interest. When concentrated monopolistic power privileges the few over the many and distorts how the free market operates, this kind of correction is vital.

How the new European law is carried out will be just as important as passing it. It is a broad and comprehensive set of rules and standards, not unlike food safety standards for cleanliness and allergen labeling. But what is also remarkable about it is that it focuses on oversight of the design and implementation of systems (like how algorithms behave) rather than determining what is good or bad speech.

The law requires that Facebook and other large social platforms be transparent about what content is being amplified and shared virally across the platform. It also requires consumer protections for features that, among other things, spy on users, addict kids or weaken public safety. With transparency finally required, it will be easier for European regulators and civil society to verify that companies are following the rules.

These rules are like systems in the United States that compel pharmaceutical companies to keep drugs safe and to allow the Food and Drug Administration to independently verify the results. Most people aren’t aware of them, but we’re all glad they are there.

The new requirement for access to data will allow independent research into the impact of social media products on public health and welfare. For example, Facebook, Instagram and others will have to open up the black box of which pages, posts and videos get the most likes and shares — shining light on the outcomes of the algorithms.

This will allow thousands more people, not just those who work at these companies, to address the complex problems of how information markets change social outcomes. As an algorithmic specialist and data scientist, I’m most excited by this. No longer will we depend on taking the companies’ word for it when they say they are trying to fix a safety problem. Democratic and investor accountability and oversight of big companies boils down to whether we can accurately diagnose the problems their products are causing, devise solutions and verify that the industry is actually following through with them. The era of “just trust us” is over.

Why did this happen in Europe? Why not right here in America, which birthed these incredible technologies? Europe knows Facebook’s censorship strategies fail societies where many languages are spoken because they require censorship systems to be built one language at a time. Only the strategy of focusing on product safety works equitably in every language, even less-spoken ones.

Europe is approving changes Congress has been trying to secure — with a slate of bipartisan bills — for several years. But in the United States, Meta, the owner of Facebook and Instagram, invests heavily in lobbyists and communications specialists in response to concerns about hate speech, conspiracy theories and misinformation.

The industry has falsely framed the way forward as a choice between free speech and safety. Meta claims it would love for everyone to be safe, but that safety would come at the cost of free speech. The documents in my disclosures paint a different picture: Meta knows that the product choices it’s made give the most reach to the most divisive and extreme ideas, and it knows how to unwind those choices to prioritize having human judgment direct our attention instead of just computers. Ideas include cracking down on bots that amplify disinformation, requiring users to click a link before resharing it, or helping people more intentionally direct the distribution of information by requiring them to copy and paste content if they want to share it beyond friends of friends. These are product choices that can reduce hate speech, harmful content and misinformation.

So why hasn’t Facebook fully implemented them? These changes add friction and slightly delay the spread of content, which also means slightly slowing the growth of Facebook’s profits. Facebook’s laser focus on quarterly returns has cost it the opportunity to build for long-term success; we’re more likely to be using Facebook 10 years from now if it’s safe and enjoyable to use. Arguing over censorship works only to further Facebook’s self-interest — while also tying our friends, neighbors and legislators into angry knots that are impossible to untie.

Let me be clear: Censorship is not the solution. We can have social media that connects us to our friends and family and that doesn’t divide us from our fellow countrymen.

Europe has laid out a path that we can adapt — in our uniquely American way — and follow.

The New York Times