Cathy O'Neil

Facebook’s Algorithms Are Too Big to Fix

This week’s Congressional testimony by whistleblower Frances Haugen drove home an important message: Facebook is actively harming millions, perhaps billions, of users around the world with a host of algorithms designed to boost engagement and advertising revenue.

This leaves the question: What should be done? The sheer size and complexity of the task preclude a simple answer. But as someone who makes a living auditing algorithms and seeking to limit the damage they can do, I have some ideas.

When I take on a job, I first consider whom the algorithm affects. The stakeholders of an exam-grading algorithm, for example, might include students, teachers and schools, as well as subgroups defined by race, gender and income. Usually there’s a tractable number of categories, like 10 or 12. Then I develop statistical tests to see if the algorithm is treating or is likely to treat any groups unfairly — is it biased against Black or poor students, or against schools in certain neighborhoods? Finally, I suggest ways to mitigate or eliminate those harms.
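To make that concrete, here is a minimal sketch of the kind of statistical check such an audit might run, written in Python. The scenario and numbers are hypothetical, not drawn from any real audit: it compares an exam-grading algorithm's pass rates across two groups and flags a disparity for closer review.

```python
# Hypothetical sketch of one step in an algorithmic audit: testing whether an
# exam-grading model's pass rates differ across demographic groups.
# Group labels and counts are illustrative, not real audit data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: groups being compared; columns: [passed, failed] counts produced by
# the algorithm's decisions (invented numbers).
outcomes_by_group = np.array([
    [820, 180],   # group A
    [730, 270],   # group B
])

chi2, p_value, dof, expected = chi2_contingency(outcomes_by_group)

pass_rates = outcomes_by_group[:, 0] / outcomes_by_group.sum(axis=1)
disparity_ratio = pass_rates.min() / pass_rates.max()  # "four-fifths rule" style check

print(f"pass rates by group: {pass_rates}")
print(f"disparity ratio: {disparity_ratio:.2f}")
print(f"chi-square p-value: {p_value:.4f}")
# A low p-value combined with a disparity ratio well below 1.0 would flag the
# algorithm for closer review and possible mitigation.
```

An audit repeats tests like this for each stakeholder group it has identified, which is manageable when there are a dozen groups and hopeless when the list is effectively endless.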

Unfortunately, this approach is difficult to apply to the Facebook newsfeed algorithm — or, for that matter, to the algorithms underlying just about any large social network or search engine such as Google. They’re just too big. The list of potential stakeholders is endless. The audit would never be complete, and would invariably miss something important. I can’t imagine, for example, that an auditor could have reasonably anticipated in 2016 how Facebook would become a tool for genocide in Myanmar, and developed a way to head off the spread of misinformation about the country’s Muslim minority. This is why I’ve long said that fixing the Facebook algorithm is a job I would never take on.

That said, with the right kind of data, authorities can seek to address specific harms. Suppose the Federal Trade Commission made a list of outcomes it wants to prevent. These might include self-harm among teen girls, radicalization of the type that led to the Capitol riots, and undermining trust in electoral processes. It could then order Facebook to provide the data needed to test whether its algorithms are contributing to those outcomes — for example, by seeking causal connections between certain types of posts and young female users’ reported concerns about body image. To provide a robust picture, there should be multiple measures of each phenomenon, with daily or weekly updates.
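As a rough illustration of what such recurring monitoring could look like, here is a minimal sketch in Python. The metric names, cadence, and numbers are assumptions invented for the example; real measures would come from data the FTC orders Facebook to provide, and the correlations shown are only a first-pass screen, not the causal analysis described above.

```python
# Illustrative sketch of recurring harm monitoring under a hypothetical FTC order.
# Column names, values, and thresholds are invented for the example.
import pandas as pd

# Hypothetical weekly extract: an exposure measure (share of feed devoted to
# appearance-focused content) alongside multiple outcome measures for one cohort.
weekly = pd.DataFrame({
    "week": pd.date_range("2021-01-04", periods=8, freq="W-MON"),
    "exposure_share": [0.12, 0.15, 0.14, 0.18, 0.21, 0.20, 0.24, 0.23],
    "reported_concern": [0.08, 0.09, 0.09, 0.11, 0.13, 0.12, 0.15, 0.14],
    "self_harm_searches": [0.020, 0.021, 0.020, 0.024, 0.027, 0.026, 0.030, 0.029],
})

# Multiple measures of the same phenomenon, tracked on the same cadence.
metrics = weekly[["exposure_share", "reported_concern", "self_harm_searches"]]
print(metrics.corr())

# A crude alert: flag weeks where exposure and every outcome measure rose together,
# marking them for deeper (causal) investigation rather than treating them as proof.
changes = weekly.set_index("week").diff().dropna()
flagged = changes[(changes > 0).all(axis=1)]
print("weeks flagged for review:", flagged.index.tolist())
```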

Under such a system, Facebook would be free to go about its business as it sees fit. There would be no need to amend Section 230 of the Communications Decency Act to make the company edit or censor what’s published on its network. But the FTC would have the evidence it needed to hold the company liable for the effects of its algorithms — for the consequences of its efforts to keep people engaged and consuming. This, in turn, could help compel Facebook itself to act more responsibly.

Facebook has had ample opportunity to get its act together. It won’t do it on its own. The limited monitoring I propose is far from perfect, but it’s a way to get a foot in the door and at least start to hold big tech accountable.

Bloomberg