I have a T-shirt that I have never put on because I don’t deserve to wear it. It says “Master of ’Metrics” on the back.
I got it in 2015 as a promotional tie-in with a review copy of a book on econometrics called “Mastering ’Metrics: The Path From Cause to Effect,” co-written by Joshua Angrist, who on Monday received the Nobel Memorial Prize in Economic Sciences along with David Card and Guido Imbens. To wear the T-shirt, one really ought to complete the book. I’m only on Page 85, so the T-shirt remains in the dresser.
That said, I am pretty excited by the awarding of the prize to Angrist, of the Massachusetts Institute of Technology; Card, of the University of California, Berkeley; and Imbens, of Stanford University. A lot of excellent articles about the Nobel have focused on how these scholars upset conventional economic wisdom on topics such as the minimum wage and immigration. I want to focus instead on the tools that the three developed. These tools are powerful yet easily graspable, like a good pair of pliers.
The problem that econometrics deals with is that correlation does not imply causation. Just because you wore mismatched socks to a job interview and didn’t get the job doesn’t prove the hypothesis that the wardrobe malfunction was what killed your chances. And you can’t test the hypothesis by rerunning the interview with matched socks.
Economists call this “the fundamental problem of causal inference.” Luckily, there’s a way around it. While it’s impossible to rewind the clock to observe both possibilities for a single individual (interview with matched socks vs. interview with mismatched socks), it’s possible to find the average effect by doing experiments on multiple people. We will never know for sure if taking an aspirin is what cured your headache, but we can measure the average effect of aspirin across thousands of headache sufferers who did or didn’t take a tablet.
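If you like seeing the logic spelled out, here is a minimal sketch in Python. The relief rates are numbers I made up; the point is that randomly assigning the tablet makes the two groups comparable, so a simple difference in group averages stands in for the within-person comparison we can never make.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

took_aspirin = rng.integers(0, 2, size=n)               # random assignment: 1 = took a tablet
relief_prob = np.where(took_aspirin == 1, 0.70, 0.50)   # invented "true" relief rates
relieved = rng.random(n) < relief_prob                   # each person's observed outcome

# We never observe both outcomes for the same person, but random assignment
# makes the two groups alike on average, so the difference in group means
# estimates the average effect of the aspirin.
ate = relieved[took_aspirin == 1].mean() - relieved[took_aspirin == 0].mean()
print(f"estimated average effect of taking aspirin: {ate:.3f}")   # close to 0.20
```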
Sometimes economists can run proper experiments, where certain randomly chosen people are “treated” (experimented on) and the rest serve as a “control” group. The 2019 Nobel in economics went to Abhijit Banerjee, Esther Duflo and Michael Kremer for such experiments, which were aimed at alleviating global poverty. More often, though, proper experiments are impossible. You can’t randomly assign certain people to be smokers or drop out of college, for instance. As a fallback, economists look for “natural experiments”: real-life situations that, because of a quirk of nature or government policy or some other source, resemble designed experiments.
Card, Angrist and Imbens are clever at identifying and learning from natural experiments. Card and his fellow economist Alan Krueger famously exploited a difference in minimum wages between New Jersey and Pennsylvania to see whether raising the minimum wage kills jobs. Fast-food restaurants on either side of the border between New Jersey and eastern Pennsylvania were similar in every important respect except how much they had to pay workers, since New Jersey had raised its minimum wage. Contrary to accepted wisdom, the economists found “no indication that the rise in the minimum wage reduced employment.”
If Card and Krueger had looked only at employment in New Jersey, they would have had trouble disentangling the effect of the higher minimum wage from the effect of seasonal changes in fast-food employment. So they exploited the fact that seasonal effects in eastern Pennsylvania are similar to those in New Jersey, effectively using Pennsylvania as the “control” group.
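Here is the same kind of sketch for that logic, which economists call difference-in-differences. The employment figures below are invented, not taken from the Card-Krueger study; what matters is that Pennsylvania’s before-and-after change stands in for what would have happened in New Jersey without the wage increase.

```python
# All four employment figures are hypothetical, for illustration only.
nj_before, nj_after = 20.0, 20.5    # avg. workers per New Jersey restaurant
pa_before, pa_after = 23.0, 23.4    # avg. workers per Pennsylvania restaurant

nj_change = nj_after - nj_before    # policy effect plus seasonal trend in NJ
pa_change = pa_after - pa_before    # seasonal trend only (the "control" group)

did = nj_change - pa_change         # the difference in differences
print(f"difference-in-differences estimate: {did:+.1f} jobs per restaurant")
```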
That’s one example of an ingenious tool that this year’s Nobel laureates advanced. Here’s another:
Let’s say you want to figure out the effect of serving in the military during the Vietnam War on earnings later in life. It’s not enough to compare lifetime wages of people who did and didn’t serve, because they might be systematically different from each other in other hard-to-detect ways. For example, what if people who didn’t serve tended to come from wealthier families?
In a 1990 paper that looked at the relationship between military service during the Vietnam War and later-life earnings, Angrist came up with a technique to get around the problem: He focused on a person’s draft lottery number. Having a low lottery number increased the likelihood of serving in the military, and there was no risk that people who drew low numbers were systematically different from people who drew high ones, because the lottery numbers were assigned at random.
Angrist recognized that this approach wasn’t perfect. A lot of those who served in the military during the Vietnam War were volunteers, which meant that they would have served even if they had high lottery numbers. Conversely, some who had low lottery numbers didn’t serve, in some cases because they qualified as conscientious objectors.
But Angrist, with Imbens, figured out how to make some reliable inferences even when the natural experiment was muddied. In the case of the draft, Angrist showed that he could clear away the mud by zeroing in on the influence of the draft number — the natural experiment — on whether a man served, ignoring other factors. He found that it’s possible to draw useful conclusions about the men who served because they were drafted, but impossible to conclude anything useful about the men who served because they volunteered. He found that “in the early 1980s, long after their service in Vietnam was ended, the earnings of white veterans were approximately 15 percent less than the earnings of comparable nonveterans.”
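One last sketch, for the lottery trick, which econometricians call an instrumental-variables (or “Wald”) estimate. Everything below is simulated — the service rates, the earnings, the assumed penalty for serving — but it shows why dividing the lottery’s effect on earnings by its effect on serving recovers the impact for the men who served because they were drafted, while a naive veterans-versus-nonveterans comparison gets thrown off by family background.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

low_number = rng.integers(0, 2, size=n)   # the "instrument": 1 = draft-eligible lottery number
family_wealth = rng.normal(size=n)        # unobserved background we cannot control for

# Serving depends on the lottery AND on background: men from poorer families
# are assumed a bit more likely to serve, and some men with high numbers
# volunteer anyway.
p_serve = 0.1 + 0.5 * low_number + 0.2 * (family_wealth < 0)
served = (rng.random(n) < p_serve).astype(float)

# Earnings depend on family background and on service; the assumed "true"
# effect of serving is a 0.15 drop.
earnings = 1.0 + 0.3 * family_wealth - 0.15 * served + rng.normal(scale=0.5, size=n)

# Naive veteran-vs-nonveteran comparison is contaminated by background.
naive = earnings[served == 1].mean() - earnings[served == 0].mean()

# Wald / instrumental-variables estimate: the lottery's effect on earnings
# divided by its effect on serving. Because the lottery was random, this
# isolates the men who served because they were drafted.
wald = ((earnings[low_number == 1].mean() - earnings[low_number == 0].mean())
        / (served[low_number == 1].mean() - served[low_number == 0].mean()))

print(f"naive comparison: {naive:+.3f}   lottery-based (IV) estimate: {wald:+.3f}")
```

In this simulation the naive comparison overstates the earnings penalty, while the lottery-based estimate lands close to the assumed effect of -0.15.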
The beauty of the Nobelists’ work is that it’s about the real world. Finding fruitful natural experiments requires not just cleverness but a deep understanding of the phenomenon being studied.
This week I interviewed Paul Romer, who was awarded the 2018 Nobel in economics for his work on growth theory. In a 2015 paper, he harshly criticized fellow economists for what he called “mathiness,” which he defined as using the language of mathematics but in a sloppy way that “leaves ample room for slippage.” That’s not a problem with this year’s laureates, who used math appropriately, he told me.
“There’s been a reaction in the profession away from theory and toward much more attention to the facts,” he said. “If you take it seriously you have to take seriously the follow-up question: Can I interpret these correlations as telling me something about causation?”
Romer identified this approach as “the real heart” of what this year’s laureates have been doing.
I asked Romer if he thought his 2015 “mathiness” critique might have nudged the profession and the Nobel committee toward the kind of work honored this year. He laughed, noting that that’s exactly the kind of question that the fundamental problem of causal inference says is impossible to answer. Nevertheless, he said he feels the profession is on a better track.
If you want to learn more about this research, two good resources are the Nobel website and a series of online videos featuring Angrist at the online economics resource Marginal Revolution University. Now I need to finish Angrist’s book so I can wear that T-shirt.