Tetlock and the Taliban
How a humiliating military loss proves that so much of our so-called "expertise" is fake, and the case against specialization and intellectual diversity
Richard Hanania
Note: Apologies to Phil Tetlock if he doesn't want to be associated with the Taliban, I just couldn't resist the alliteration. Also apologies to the Taliban if they don't want to be associated with an American academic, though I assure them that Phil is one of the good ones.
Imagine that the US was competing in a space race with some third world country, say Zambia, for whatever reason. Americans of course would have orders of magnitude more money to throw at the problem, and the most respected aerospace engineers in the world, with degrees from the best universities and publications in the top journals. Zambia would have none of this. What should our reaction be if, after a decade, Zambia had made more progress?
Obviously, it would call into question the entire field of aerospace engineering. What good were all those Google Scholar pages filled with thousands of citations, all the knowledge gained from our labs and universities, if Western science gets outcompeted by the third world?
For all that has been said about Afghanistan, no one has noticed that this is precisely what just happened to political science. The American-led coalition had countless experts with backgrounds pertaining to every part of the mission on their side: people who had done their dissertations on topics like state building, terrorism, civil-military relations, and gender in the military. General David Petraeus, who helped sell Obama on the troop surge that made everything in Afghanistan worse, earned a PhD from Princeton and was supposedly an expert in “counterinsurgency theory.” Ashraf Ghani, the just-deposed president of the country, has a PhD in anthropology from Columbia and is the co-author of a book literally called Fixing Failed States. This was his territory. It’s as if Wernher von Braun had been given all the resources in the world to run a space program and had been beaten to the moon by an African witch doctor.
Meanwhile, the Taliban did not have a Western PhD among them. Their leadership was highly selected, though. As Ahmed Rashid notes in his book The Taliban, in February 1999 the school that provided the leadership for the movement “had a staggering 15,000 applicants for some 400 new places making it the most popular madrassa in northern Pakistan.” Yet they certainly didn’t publish in or read the top political science journals. Consider this a data point on the question of whether intelligence or subject-matter expertise is more important.
Is the moon shot analogy fair? I think it probably strikes many people as odd, but I don’t see why it should. Surely there were many political scientists, me among them, who thought that what the US was trying to do in Afghanistan was impossible given the resources invested, and maybe it’s simply the “experts” hired by NGOs, think tanks, and the US government who were delusional.
Yet I wonder what the field of civil engineering would say if the US went abroad and tried to build bridges based on principles that violated the laws of physics. I’d like to think the Pentagon would have trouble finding well-credentialed experts to help, and that those who did take a paycheck to help achieve the impossible would lose all credibility in their field. That, of course, has not happened to the pundits and social scientists who spent 20 years making a living off the idea that the US was doing something reasonable in Afghanistan.
Tetlock’s Discovery
Phil Tetlock’s work on experts is one of those things that gets a lot of attention, but still manages to be underrated. In his 2005 book Expert Political Judgment: How Good Is It? How Can We Know?, he found that the forecasting abilities of subject-matter experts were no better than those of educated laymen when it came to predicting geopolitical events and economic outcomes. As Bryan Caplan points out, we shouldn’t exaggerate the results here and provide too much fodder for populists; the questions asked were chosen for their difficulty, and the experts were being compared to laymen who had nonetheless met some threshold of education and competence.
At the same time, we shouldn’t put too little emphasis on the results either. They show that “expertise” as we understand it is largely fake. Should you listen to epidemiologists or economists when it comes to COVID-19? Conventional wisdom says “trust the experts.” The lesson of Tetlock (and the Afghanistan War) is that while you certainly shouldn’t be getting all your information from your uncle’s Facebook wall, there is no reason to start with a strong prior that people with medical degrees know more than any intelligent person who honestly looks at the available data.
I have a PhD in political science with a focus on international relations. Most people in my position would tell you to give my opinions on my topic of expertise more weight because of my credentials. I believe that, if anything, you should hold my degree against me: getting a PhD is probably the most inefficient way to understand a topic, and a person seeking that credential has shown that they don’t understand this. I think I’ve been right on Afghanistan and other American interventions because of good intellectual habits, including a genuine concern with what is true. But that has little to do with any training I got from political science.
I think one of the most interesting articles of the COVID era was a piece called “Beware of Facts Man” by Annie Lowrey, published in The Atlantic.
What does he serve up there? Truth. Facts. The overlooked and the undercovered. The unvarnished and obvious conclusions that the media do not want you to believe. The conclusions that the social-justice warriors and sheeple professors will not let you reach. The conclusions that mere mortals, including lauded subject-matter experts and the people who have actual lived experience of the topic at hand, have not yet grasped…
He—and he is almost always a he—is a venture capitalist who has analyzed the hospitalizations data! He is a growth hacker with a piercing view of race and measures of intelligence! He is an industry analyst with insight into viral spread! He is a lawyer exploding nuances of gender and sex!
The Facts Man gives it to you straight. With his college degree, with his top-quality résumé, with his insider knowledge, with his background in Euclidean something-or-other—sharpened by debating with the smartest people, who never went to school—here is what he has found. These are the data. These are more data. This. Is. It. Here’s the inevitable conclusion. It’s the only conclusion possible!…
Facts Man is Science Facts Man, he is adjacent to science, so he understands science better than scientists. He has credentials that let him look at the data and see them, instead of looking at the data and just looking at them. Or looking at them and interpreting them however your field interprets them. Or looking at them and waiting for them to be interpreted in the press. Science Facts Man operates without the encumbrances of peer review or any sense of the complexities endemic to many scientific fields. That is what he brings to the debate!
The reaction to this piece was something along the lines of “ha ha, look at this liberal who hates facts.” But there’s a serious argument under the snark, and it’s that you should trust credentials over Facts Man and his amateurish takes. In recent days, a 2019 paper on “Epistemic Trespassing” has been making the rounds on Twitter. The theory that specialization is important is not on its face absurd, and probably strikes most people as natural. In the hard sciences and other places where social desirability bias and partisanship have less of a role to play, it’s probably a safe assumption. In fact, academia is in many ways premised on the idea, as we have experts in “labor economics,” “state capacity,” “epidemiology,” etc. instead of just having a world where we select the smartest people and tell them to work on the most important questions.
But what Tetlock did was test this hypothesis directly in the social sciences, and he found that subject-matter experts and Facts Man basically tied. As he writes in his book,
…collapsing across all judgments, experts on their home turf made neither better calibrated nor more discriminating forecasts than did dilettante trespassers… at each level along the subjective probability scale from zero to 1.0, expert and dilettante calibration curves were strikingly similar. People who devoted years of arduous study to a topic were as hard-pressed as colleagues casually dropping in from other fields to affix realistic probabilities to possible futures.
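To make the jargon concrete: a calibration curve groups forecasts by their stated probability and checks, within each group, how often the predicted events actually happened. A perfectly calibrated forecaster who says “70%” sees those events occur about 70% of the time. Below is a minimal sketch of that computation in Python; it is my own illustration, not code or data from Tetlock’s study, and the toy forecasts at the end are hypothetical.

```python
# Minimal sketch of a calibration curve; illustration only,
# not Tetlock's code or data.
from collections import defaultdict

def calibration_curve(forecasts, outcomes, bins=10):
    """Group forecasts by stated probability and compare each group's
    average stated probability with the observed event frequency.

    forecasts: stated probabilities in [0, 1]
    outcomes:  1 if the event occurred, 0 if it did not
    Returns a list of (mean stated probability, observed frequency, n).
    """
    groups = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * bins), bins - 1)  # bin index; clamp p == 1.0 into the top bin
        groups[b].append((p, y))
    curve = []
    for b in sorted(groups):
        pts = groups[b]
        mean_p = sum(p for p, _ in pts) / len(pts)
        freq = sum(y for _, y in pts) / len(pts)
        curve.append((mean_p, freq, len(pts)))
    return curve

# Hypothetical toy numbers: a forecaster is well calibrated when
# mean_p is close to freq in every bin.
print(calibration_curve([0.9, 0.8, 0.7, 0.3, 0.1, 0.6],
                        [1,   1,   0,   0,   0,   1]))
```

Tetlock’s finding, restated in these terms, was that the experts’ curve hugged the diagonal no more tightly than the dilettantes’ did.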
Some Facts Men on the internet are certain to be cranks and worse than any experts, but you should always take good intellectual habits over credentials. A full treatment of what “good intellectual habits” means is beyond the scope of this essay, but, as indicated above, any list of traits should start with being intelligent, high on cost-benefit analysis, and low on social desirability bias. If Facts Man is smart, the fact that he’s a jerk may even count in his favor, since it signals low social desirability bias; if not, he’s just annoying without having the benefit of being right.
Interestingly, one of the best defenses of “Facts Man” during the COVID era was written by Annie Lowrey’s husband, Ezra Klein. His April 2021 piece in The New York Times showed how economist Alex Tabarrok had consistently disagreed with the medical establishment throughout the pandemic and had consistently been right. So you have the “Credentials vs. Facts Man” debate within one elite media couple. If this were a movie, they would’ve switched the genders, but since this is real life, stereotypes are confirmed and the husband and wife take the positions you would expect.
How I Stopped Believing in Academia and Expertise
I decided to get a PhD in political science because I was interested in history and wanted to understand issues like the causes of armed conflict and why war has declined. Pretty early on, I came to realize that I could not get published in top journals by exploring large, important questions, at least using methods that I could trust. In international relations, there is one highly prestigious journal that deals with the “big questions,” and that’s International Security (IS). It puts less emphasis on fancy stats and more on historical research and how important a topic is. This is the kind of International Relations (IR) most people run into in the mass media, and people like Kennan, Kissinger, and Mearsheimer are in this tradition. But IS is basically your only option for doing this kind of work among top-level journals, and space is limited. Once, I had two reviewers recommend a paper be published there, but the editor arbitrarily decided to reject it, and I knew I had no good fallback option.
Meanwhile, the top political science journals, namely the “Big 3” of the American Political Science Review (APSR), the Journal of Politics (JoP), and the American Journal of Political Science (AJPS), will take IR papers, but they put more emphasis on having the best methods. Of course, I have to point out here that these “best methods” are often highly flawed and rest on arbitrary assumptions, as Philippe Lemoine has shown in reviewing work related to COVID-19. His posts on this are long, but they need to be, because the methods are so fancy that it takes a long time just to explain what scholars are doing before you even get to the point of showing that their research is nonsense. But trust me, if you can follow along, it’s worth it.
So I didn’t really believe in the fancy methods necessary to get in the top journals. One thing I could do, however, was conduct research on public opinion on foreign policy. That was the topic of my dissertation, and I got two publications in decent journals out of it, in addition to a revise and resubmit that ultimately ended in rejection at APSR.
In the end, I don’t think my dissertation contributed much to human knowledge, making it no different than the vast majority of dissertations that have been written throughout history. The main reason is that most of the time public opinion doesn’t really matter in foreign policy. People generally aren’t paying attention, and the vast majority of decisions are made out of public sight. How many Americans know or care that North Macedonia and Montenegro joined NATO in the last few years? Most of the time, elites do what they want, influenced by their own ideological commitments and powerful lobby groups. In times of crisis, when people do pay attention, they can be manipulated pretty easily by the media or other partisan sources.
If public opinion doesn’t matter in foreign policy, why is there so much study of public opinion and foreign policy? There’s a saying in academia that “instead of measuring what we value, we value what we can measure.” It’s easy to do public opinion polls and survey experiments: you can derive a hypothesis, get an answer, and make it look sciency with charts and graphs. To show that your results have relevance to the real world, you cite some papers that supposedly find that public opinion matters, maybe including one based on a regression showing that under very specific conditions foreign policy determined the results of an election. Maybe that paper is well done and maybe it’s not, but as long as you put the words together and the citations in the right format, nobody has time to check any of this. The people conducting peer review on your work will be those who have already decided to study the topic, so you couldn’t find a more biased referee if you tried.
Thus, an IR scholar has two main options: use statistical methods that don’t work, or actually find answers to questions, but questions so narrow that they have no real-world impact or relevance. A smaller portion of academics in the field just produce postmodern-generator-style garbage, hence “feminist theories of IR.” You can also build game-theoretic models that, like the statistical work in the field, are based on a thousand assumptions that are probably false and that no one will ever check. The older tradition of Kennan and Mearsheimer is better and more accessible than what has come lately, but the field is moving away from that and, like a lot of things, towards scientism and identity politics.
Academics call the older tradition “qualitative work,” which is just a fancy way of saying non-statistical: you read history and try to draw some lessons. Some people who have done interesting things in the field and are worth reading are John Mueller, Robert Pape, Josh Shifrinson, David Kang, and my old advisers Robert Jervis and Marc Trachtenberg. Even among these scholars, who are generally careful and insightful, the writing is more turgid and cluttered than it needs to be, which I suspect academics feel they must do to justify their work as “real science.” Yet I don’t think IR has produced large and generalizable theories, nor tools that give someone with a PhD in the subject many advantages over those who have never studied it.
At some point, I decided that if I wanted to study and understand important questions, and do so in a way that was accessible to others, I’d have a better chance outside of the academy. Sometimes people thinking about an academic career reach out to me, and ask for advice. For people who want to go into the social sciences, I always tell them not to do it. If you have something to say, take it to Substack, or CSPI, or whatever. If it’s actually important and interesting enough to get anyone’s attention, you’ll be able to find funding.
If you think your topic of interest is too esoteric to find an audience, know that my friend Razib Khan, who writes about the Mongol empire, Y-chromosomes and haplotypes and such, makes a living doing this. If you want to be an experimental physicist, this advice probably doesn’t apply, and you need lab mates, major funding sources, etc. If you just want to collect and analyze data in a way that can be done without institutional support, run away from the university system.
The main problem with academia is not just the political bias, although that’s another reason to do something else with your life. It’s the entire concept of specialization, which holds that you need some secret tools or methods to understand what we call “political science” or “sociology,” and that these fields have boundaries between them that should be respected in the first place. Quantitative methods are helpful and can be applied widely, but in learning stats there are steep diminishing returns.
Part of the reason specialization is bad is that people have to justify the existence of the field itself. As a reader wrote to the Marginal Revolution blog, ethicists have some terrible takes because, to get published, they need to be original. Simply saying “do the thing that will save lives and not involve any coercion,” as would have been the case with human challenge trials, would not cut it. Now that we have effective vaccines for COVID-19, it’s the rare epidemiologist who will say “Thanks, our job is done.” I’m convinced this is what happened with much of IR: because we can use statistical methods on data, the methods must be suited to the task, and because we can measure public opinion, what the average citizen thinks about NATO expansion must have an actual effect on policy. Lemoine’s work on epidemiological models shows much the same thing: we have these tools, so they must explain something real about the world. If your audience is Substack or the readers of The Atlantic, they are less likely to buy into your fancy stats or irrelevant public opinion surveys as important and insightful, because they don’t have a stake in the field the way peer reviewers do.
This discussion has been centered on my own experience. But from talking to people and what I’ve seen in other fields, much of academia is like this, and I am not surprised that all the expertise that the Pentagon and the State Department could gather did not translate into a successful outcome in Afghanistan.
Fake Expertise is Everywhere
Outside of political science, are there other fields that have their own equivalents of “African witch doctor beats von Braun to the moon” or “the Taliban beats the State Department and the Pentagon” facts to explain? Yes, and here are just a few examples.
Consider criminology. More people are studying how to keep us safe from other humans than at any other point in history. But here’s the US murder rate between 1960 and 2018, not including the large uptick since then.
So basically, after a rough couple of decades, we’re back to where we were in 1960. But the picture is actually much worse, because improvements in medical technology are keeping alive a lot of people who would’ve died 60 years ago. One paper from 2002 estimates that the murder rate would be five times higher if not for medical developments since 1960. I don’t know how much to trust this, but it’s surely true that we’ve made some medical progress since that time, and doctors have gained a lot of experience from all the shooting victims they have treated over the decades. Moreover, we’re much richer than we were in 1960, and I’m sure spending on public safety has increased. With all that, we are now roughly tied with where we were six decades ago, a massive failure.
What about psychology? As of 2016, there were 106,000 licensed psychologists in the US. I wish I could find data to compare to previous eras, but I don’t think anyone will argue against the idea that we have more mental health professionals and research psychologists than ever before. Are we getting mentally healthier? Here is the US suicide rate from 1981 to 2016.
Note that psychology has compiled this record with the aid of pharmaceutical drugs, some of which undoubtedly work. Subtract those, and the picture might be much worse.
What about education? I’ll just defer to Freddie deBoer’s recent post on the topic, and Scott Alexander on how absurd the whole thing is.
Maybe there have been larger cultural and economic forces that it would be unfair to blame criminology, psychology, and education for. Despite no evidence that we’re getting better at fighting crime, curing mental problems, or educating children, maybe other things have happened that have outweighed our gains in knowledge. Perhaps the experts are holding up the world on their shoulders, and if we hadn’t produced so many specialists over the years, thrown so much money at them, and gotten them to produce so many peer-reviewed papers, we’d see Middle Ages levels of violence all across the country and no longer even be able to teach children to read. Like an Ayn Rand novel, if you just replaced the business tycoons with those whose work has withstood peer review.
Or you can just assume that expertise in these fields is fake. Even if there are some people doing good work, either they are outnumbered by those adding nothing or even subtracting from what we know, or our newly gained understanding is not being translated into better policies. Considering the extent to which government relies on experts, if the experts with power are doing things that are not defensible given the consensus in their fields, the larger community should make this known and shun those who are getting the policy questions so wrong. As in the case of the Afghanistan War, this has not happened, and those who fail in the policy world are still well regarded in their larger intellectual community.
The Case against Intellectual Diversity
Few people seriously study epistemology, but anyone who seeks a broad understanding of the world must rely on heuristics for how to obtain and process information. Trump-era liberalism has settled on “trust the experts,” elevating subject-matter expertise. That may actually not be the worst possible approach for the general public, which doesn’t have the time or ability to compare and contrast the views of Alex Tabarrok with those of Anthony Fauci, or to tell the difference between a “non-expert” like Tabarrok and one like Alex Berenson, also known as “the pandemic’s wrongest man.”
This is the dilemma, then: “trust the experts” can lead the public astray, and so can “don’t trust the experts.” While that’s an issue for most people, for intellectuals the problem is more soluble. When you don’t have the time to research something for yourself, what you should do is trust those who have good intellectual habits.
Those opposed to cancel culture have taken up the mantle of “intellectual diversity” as a heuristic, but there’s nothing valuable about the concept itself. When I look at the people I’ve come to trust, they are diverse on some measures, but extremely homogeneous on others. IQ and sensitivity to cost-benefit considerations seem to me to be unambiguous goods in figuring out what is true or what should be done in a policy area. You don’t add much to your understanding of the world by finding people with low IQs who can’t do cost-benefit analysis and adding them to the conversation.
One of the clearest examples of bias in academia and how intellectual diversity can make the conversation better is the work of Lee Jussim on stereotypes. Basically, a bunch of liberal academics went around saying “Conservatives believe in differences between groups, isn’t that terrible!” Lee Jussim, as someone who is relatively moderate, came along and said “Hey, let’s check to see whether they’re true!” This story is now used to make the case for intellectual diversity in the social sciences.
Yet it seems to me that this isn’t the real lesson here. Imagine if, instead of Jussim coming forward and asking whether stereotypes are accurate, Osama bin Laden had decided to become a psychologist. He’d say, “The problem with your research on stereotypes is that you do not praise Allah the all-merciful at the beginning of all your papers.” If you added more feminist voices, they’d say something like “This research is problematic because it’s all done by men.” Neither of these perspectives contributes much. You’ve made the conversation more diverse, but dumber. The problem with psychology was a very specific one: liberals are particularly bad at recognizing obvious facts about race and sex. So yes, in that case the field could use more conservatives, not “more intellectual diversity,” which could just as easily make the field worse as make it better. And just because political psychology could use more conservative representation when discussing stereotypes doesn’t mean those on the right always add to the discussion rather than subtract from it. Since many religious Republicans oppose the idea of evolution, we don’t need a “conservative” position to come along and add a new perspective to biology.
The upshot is intellectual diversity is a red herring, usually a thinly-veiled plea for more conservatives. Nobody is arguing for more Islamists, Nazis, or flat earthers in academia, and for good reason. People should just be honest about the ways in which liberals are wrong and leave it at that.
I recently discovered a 2019 piece on the Heterodox Academy blog from Musa al-Gharbi, arguing that the intellectual diversity crowd was being inconsistent by rejecting other forms of diversity.
There are some people in the viewpoint diversity movement who enthusiastically argue that political and religious views shape our understanding of social phenomenon – and therefore ideological diversity is important, as is engaging the work of people from various ideological backgrounds. Many of these will even concede that geography makes a difference – for instance, whether a scholar is from (or resides in) the elite beltways v. the heartland can shape how people look at the world. Some take it a step further and argue that socioeconomic background also matters – that people who were born relatively well-off probably have a different set of experiences and priors than someone who came from a humbler background. On this basis, they may support initiatives to better integrate perspectives of small-town, rural or lower-income Americans into the academy alongside lobbying for more engagement with conservative and religious views.
Yet many who recognize the importance of all of the aforementioned factors then arbitrarily draw a line with respect to the ways race and gender may inform scholarly work — as though it makes no difference in shaping one’s perspective and interpretation of facts should one go through life white, as compared to a minority, or as a man as compared to a woman.
I agree with the point here, and al-Gharbi solves the contradiction by saying we should seek all kinds of diversity. I go in the opposite direction: we should not care about diversity at all. In fact, on certain dimensions we should seek intellectual homogeneity. If selecting for those with healthy intellectual habits gets us an elite without racial, gender, geographic, or socioeconomic diversity, so be it. Same with diversity across academic disciplines, given that many or most of them are fake.
If this causes problems because elites end up highly unrepresentative of the population, maybe we can solve that with a quota system, selecting leaders from the relevant demographics through the same institutions and standards. A particularly bad solution is to hold up “diversity” as a goal and distort our entire intellectual culture, which in practice means deferring to less intelligent “experts” over smarter generalists, or believing that because a position disproportionately appeals to white men we must hold a presumption against it.
“Specialization” and “intellectual diversity” go hand in hand. Both sound good, and both are ultimately rooted in a desire for a more egalitarian intellectual culture. In this view, a PhD in education is just as much of a “doctor” as a Nobel Prize-winning physicist, and has much more credibility when talking about how to teach children. Not everyone can engage in careful analysis of data in a way that can withstand scrutiny, but lower the standards enough and create enough fields, and a lot of people can be experts. Since people aren’t getting smarter, more experts just means that the average intelligence of those influencing public policy drops, in the same way that the average intelligence of college graduates drops as we expand access. Each side in our political debate cares more about who the experts are than whether they make good decisions. Liberals say we need more minorities and women, and as the ascendant populism of the right comes to embrace some of the worst features of the left, we see similar demands for representation of rural whites and Republicans. Whether adding more people from any particular group makes a field or an intellectual community better or worse must be decided on a case-by-case basis.
Tolstoy famously wrote, “Happy families are all alike; every unhappy family is unhappy in its own way.” Intellectual life is a lot like that: insightful people tend to share similar priors and cognitive traits, while those who are wrong are very diverse in their thinking. There are a lot of ways to be wrong about COVID-19. You can be the kind of person who trusts the experts, in which case you’ll be right on vaccines and wrong on masking children. If your strategy is to reject what the experts say, you end up with the opposite problem. One can imagine a thought process steeped in the appeal-to-nature fallacy, which rejects all medical interventions, or one that is too eager to try any kind of snake oil advertised on social media. If you study the various people and communities that have been led astray during COVID-19, you will find all kinds of interesting combinations of beliefs, each with its own internal logic.
The list of ways to be wrong is truly endless, while the path to truth, or at least some ballpark approximation of what’s true, is extremely narrow.
The failure in Afghanistan was mind-boggling. Perhaps never in the history of warfare had there been such a resource disparity between two sides, and the US-backed government couldn’t even last through the end of the American withdrawal. One can choose to understand this failure through a broad or narrow lens. Does it only tell us something about one particular war, or is it a larger indictment of American foreign policy?
The main argument of this essay is that we’re not thinking big enough. The American loss should be seen as a complete discrediting of the academic understanding of “expertise,” with its reliance on narrowly focused peer-reviewed publications and subject-matter knowledge as the way to understand the world. Although I don’t develop the argument here, I think I could make the case that expertise isn’t just fake; it actually makes you worse off, because it gives you a higher level of certainty in your own wishful thinking. The Taliban probably did better by focusing their intellectual energies on interpreting the Holy Quran and taking a pragmatic approach to how they fought the war, rather than proceeding with a prepackaged theory of how to engage in nation building, which for the West conveniently involved importing its own institutions.
A discussion of the practical implications of all this, or how we move from a world of specialization to one with better elites, is also for another day. For now, I’ll just emphasize that for those thinking of choosing an academic career to make universities or the peer review system function better, my advice is don’t. The conversation is much more interesting, meaningful, and oriented towards finding truth here on the outside.