Sunday, June 24, 2012

Self-Hating Political Scientist

I tried, I really tried, to ignore the screed at the NYT against political science (especially of the quant variety), but Jacqueline Stevens's rant is such a poor effort that I know it will be widely read and influential.  Why?  Because bad ideas often spread further and faster than good ones (see Clash of Civilizations).

There are so many things wrong with this piece that it is hard to know where to start.  First, I am, of course, much of what this woman hates about political science.  I have worked not just with Department of Defense dollars but actually in the Pentagon and, dare I say it, liked it.  I have taken National Science Foundation [NSF] money--about $4,000.  I have used ... data!

Ok, with that disclaimer aside, I guess the only way to address this piece is to go through it from the top.  Otherwise, I might write something as incoherent as Stevens's piece.  Ok, one more disclaimer: I am mighty miffed to see a left-wing political scientist end up as a fellow traveler with the right-wing ones who are trying to de-fund NSF's political science program.  I don't know this person, as her work is in political theory, a subfield that I do not know well.

Stevens argues that "it's an open secret" that the creation of "contrived data sets" has failed to produce "accurate political predictions."  Oh, really?  Yes, anyone creating a data set understands that coding political behavior means making assessments and assumptions.  But any other methodology that seeks to generalize about politics also has to make assessments and assumptions.  So, quantitative work will vary, just as qualitative work does, in how well it is performed.  Yes, there are alternatives to using the past--whether in slices of numbers or in case studies--such as experiments, surveys, and game theory, but they face the same problems.  So, either we go ahead and try to test our hypotheses and figure out whether there are generalizable dynamics or we don't.  If we don't, then we don't need federal grant money or any funding, as we can just think and write without doing the hard work of gathering data via coding or via interviews or whatever.

The second problem with this sentence is this: most of us do not aspire to provide accurate predictions of single events.  Most of us seek to understand the causes of outcomes, which leads us to be able to predict that y is more likely, or perhaps only possible, if x is present (a kind of claim she ultimately condemns in her conclusion).  This can lead to predictions.  Indeed, having more understanding should allow us to develop expectations.  Having less understanding or no understanding is probably not the pathway to predicting anything.

Ok, let's go to the examples.  First--the end of the Cold War.  Oy.  Yes, we didn't have too many scholars predicting the Soviet Union would fall apart.  There are a lot of reasons for this, including a status quo bias: our theories have often tended to predict continuity rather than change.  Second, our work is not designed to produce predictions of when a country falls apart.  Third, the accusation that International Relations failed misses the target, as the Soviet Union collapsed due to domestic political processes, with external forces filtered through those domestic dynamics.  Sure, that just shifts the target from IR to Comparativists, but I would also say that our understanding of how authoritarianism works has always lagged behind our understanding of democracy.  Data limitations, for one, made it hard to figure out.  Also, much of our work is based on how institutions work, but institutions tend to be less binding where rule of law is absent.  Fourth, there were lots of elements to the end of the Soviet Union that we did understand pretty well, such as how ethnic identity can generate conflict.

The really funny thing about Nancy Reagan's astrologer is that we don't know what she said, so we cannot evaluate her predictions.

Stevens goes on to say that political science didn't get how Al Qaeda changed global politics and that we didn't get the Arab Spring.  Um, I guess Stevens doesn't read much, as there was plenty of work out there--written before 9/11--on globalization, including the globalization of violence, and how it would alter the conduct of international relations.  On the Arab Spring, no, folks didn't predict Egypt's fall, but plenty of NSF-funded research on dissent and repression can make sense of what happened, including why democracy only seems to have flowered in one spot.  Again, aside from a few of us, we do not say x will happen in 2011.

The op-ed then goes on to say that our errors are caused by adopting the priorities of government, by sucking up for money from DoD and elsewhere.  Please.  Yes, we are fad-driven--we shift some of our attention based on what will get us money: security issues in the 1960s; oil and interdependence in the 1970s; security during the resurgence of the Cold War in the early 1980s; the European Union in the late 1980s; ethnic conflict in the aftermath of the Cold War; terrorism and then counter-insurgency in the 2000s.  So, yes, some of the field and some of the funding does shift--but those are shifts in topics, not so much in what our answers are.  That is, some of us will shift our research to study what the public wants to know more about (the government is a public institution, right?), but in general we do not shift our answers or game the models to give answers that we think folks will want.  More importantly, isn't this point a contradiction for Stevens?  She starts off by saying that our work does not provide for the public good because we cannot predict, but then she questions why we might be asking questions to which the public might want answers.

The next paragraph shows how little Stevens understands quantitative political science, or modern political science in general, as she cites a few articles with which I am most familiar.  Fearon and Laitin's [F&L] piece, which argues that ethnic grievances matter less and that civil wars mostly result from weak states, has faced much criticism, including from me.  First, to be clear, their model does, ahem, predict which countries will have civil wars pretty well--in general, not specific ones at specific times.  Second, arguing that subsequent work disagrees with F&L is not a condemnation of political SCIENCE but misses the point, as science of any kind requires give and take.  Not every prominent piece of work that gets published is venerated, nor does it settle the question.  The Cederman, Weidmann, and Gleditsch [CWG] piece (which I happened to review for the APSR) is an advancement precisely because it was inspired by the F&L article.  Fearon and Laitin made others think harder and provide evidence to make their cases better than if no one had written that piece, or if it had not been based on some hunk of evidence grounded in reality.  The really amusing thing is that CWG make their case using ... data.  They coded inequalities and made assumptions about what their indicators measured to argue that ethnic grievances do matter.  CWG relied on heaps of public funding (European, I think, and some NSF) to collect a heap of data to test assertions about which factors are associated with a higher probability of violence.  It is not a perfect piece, but it forces us to re-think the conventional wisdom.

The point here is THIS IS HOW SCIENCE WORKS.  If you want to make general claims (which you must if you want to make any kind of prediction), you need to develop a set of hypotheses based on your logic and on how others have addressed the question; you then need to develop some sort of test to see whether the various hypotheses are reflected in the real world; and then you assess what you found.  The process is part of an ongoing social engagement with other folks studying the same stuff--which means arguing.

The slam she adds is that our conclusions about grievances could be found in the NYT.  Lovely, yes, because everything published in newspapers is right, including this piece?  Um, HA!

Stevens then cites Tetlock's work on experts, saying that we don't do better than chimps throwing darts.  Actually, the line she writes is that chimps throwing darts "would have done almost as well as the experts."  Oh, so we actually do make predictions that are better than random.  That is good, right?  Even if we are not perfect?  [I have not read Tetlock's work so I cannot speak to it directly, but others can and have.]

"Government can and should assist political scienitsts, especially those who use history and theory to explain shifting contexts, challenge our intuitions and help us see beyond daily newspaper headlines."  Um, isn't that what Fearon and Laitin did?  That they pointed out that state weakness is problematic, perhaps more so than ethnic grievances?  Didn't this explain the challenges we face today given the number of weak states in the world?  Didn't it challenge our intuitions by suggesting that it is not just about ethnic hatreds (oh, and yes, if we want to talk about crap in the newspapers, let's start with ancient hatreds)?  Isn't state weakness not something that appears in most newspapers?  Fearon and Laitin were not entirely right, but they were not entirely wrong either.

"Research aimed at political prediction is doomed to fail.  At least if the idea is to predict more accurately than a dart-throwing chimp." Does she read her own stuff?  She said that chimps throwing darts did "almost as well as the experts."  So, the advantage still goes to the experts. What is better? A lack of expertise?

She concludes by saying that NSF money should be allocated by lottery: anyone with a PhD and a "defensible" budget could apply.  This is utterly ridiculous.  How about we then have lotteries for who gets their articles into the prestigious journals and books into the prestigious presses?  After all, if review processes taint who gets money, won't they taint who gets published?  Of course, we had that debate in the 1990s--perestroika, it was called: an effort to rebuild political science to do away with the hegemony of quantitative analyses.  That is Stevens's goal: witness her last sentence, which is not about point prediction at all: "I look forward to seeing what happens to my discipline and politics more generally once we stop mistaking probability studies and statistical significance for knowledge."  Aha!  See, this was not about predicting stuff; it was about quantitative work.  Her animus is not really about failing to predict the end of the Cold War--which qualitative analysts also did not predict.  Her target, despite the token lines in the conclusion about getting some data, is quantitative work.

What upsets me is that she condemns political science in general and lines up with those who are ideologically motivated to de-fund political science (because it does bad things like study accountability).  But the NYT would not have published this piece if it had said "I was on the losing side of a battle ten years ago to re-structure political science because I don't like quantitative work."

The funny thing is that these folks did win in a way--mixed methods is the way of the 21st century, and methodological pluralism is the dominant path.  You use the methods that work best to answer your questions--at least in IR and Comparative.  In American Politics, quant work is still pretty hegemonic, I guess.

Anyhow, the point is that Stevens does not understand contemporary political science, which makes her a mighty poor advocate for her position.  I guess if political science is being attacked from the far right and from the far left, it must be doing something right.

[For more and better takes on this, see pieces elsewhere, including here and here.]

3 comments:

Anonymous said...

Nice post.

Anonymous said...

As an economist (who's worked in academia, think tanks, and the Pentagon), I wonder if I recognize some of the elements of this spat. There are economists who work on real problems, but they have to forget most of what goes on in academia -- especially in modern macroeconomics. There are economists in academia who work on abstruse problems of no consequence to anyone not in academia, and who are concerned only with reputation and promotion. The latter group may voice opinions about the practice of the former, but they are generally uninformed and irrelevant. Why should political science be any different?

Eric Rasmusen said...

I'm an economist too. Reading the Stevens piece, I don't see any special condemnation of rational-choice or quant poli sci. Indeed, most of what she says really applies not to academic work, but to "experts" making predictions for the media. True, political scientists missed the downfall of Communism, but none of the quant people were saying anything about the stability of the Soviet Union, were they? Rather, it was the political scientists with "soft", "real world" knowledge based on talking with leaders and analyzing documents who turned out to be entirely misled. What she is really saying is that non-quant work is highly dubious. That would even go for her own field of political theory; I don't think any political theorists were predicting the fall of the Soviet Union either.

p.s.-- actually, there is one area where academic models make lots of sharp predictions--election results. And you'll get lots better results from a poli sci professor than from a chimp if you wonder who is going to win in Maryland, Obama or Romney.