Chris Blattman asks us to comment on his advice to aspiring graduate students who want to be impact evaluators:
If your goal is to improve the delivery of aid, and truly advance development, many more skills and knowledge are involved than the randomized evaluation. See here for more. But in short: a well-identified causal impact that arrives two years after the program does not performance management make.
My conclusion is similar, but my perspective is a bit different.
The development business needs much better and more rigorous evaluations. There are a huge number of superficial, poorly designed evaluations whose sole purpose is to tick a box, rather than to find out what works. We would be better off with fewer, properly designed evaluations.
Like Chris, I think there are ways to do rigorous evaluations other than through randomised trials. But I probably believe more strongly in randomisation as the most rigorous and effective approach where possible. I think the burden of proof is on those who want to use other methods to show that a randomised trial is impossible, or disproportionately expensive, or unethical, or that their alternative approach is in some way superior. Very often the choice is made out of laziness or ignorance.
But I think there is a big problem in the evaluation industry. Too many evaluations are conducted because they are methodologically interesting – for example, because a researcher has thought of a neat way of randomising, or of demonstrating a new statistical technique. Apparently using well-established and proven techniques of evaluation is too boring: people think they won’t get tenure if they don’t do something new. The result is that evaluations are frequently driven by the researcher’s interest in evaluation techniques, rather than by the most important and relevant questions in development policy. We know all kinds of strange things about the use of deworming pills, but we don’t know the best way to get girls into school.
And that is why I agree with Chris that you should not specialise in having a PhD in evaluation, and that a broad range of skills is preferable. What we want is people who understand the big picture, and know what the important questions are to ask, and hopefully answer, instead of people who have disappeared up their own navel with an obsession about neat new statistical techniques.