This article first appeared in Norrag News, in a special edition with lots of interesting articles looking at “Value for Money in International Education: A New World of Results, Impacts and Outcomes”.
“The way to get things done is not to mind who gets the credit.”
– Benjamin Jowett, English Clergyman (1817-1893)
It seems perverse at first to oppose measuring the results of overseas aid. Opposing it would make sense if you worked for a merchant bank trying to rip off its clients, as a former employee recently alleged of Goldman Sachs. If your business relies on your customers being less informed than you about the value of what you are selling them, you can see why even a little sunlight may be dangerous.
But foreign aid is not like that. Everyone I meet who works on development is motivated by the desire to make a difference for people in the developing world. So why is there so much anxiety among serious and sensible people about the idea that we should measure and make public the results of aid? There are seven common criticisms of the so-called ‘results agenda’.
First, it may add to bureaucratic overload. Collecting information about results is yet one more central reporting requirement, in a system that is already overburdened with forms and procedures.
Second, it may make aid less strategic. The need to produce quantifiable results may lead donors to prefer a short-term investment in something which can be measured over an investment which may have more significant, longer-term but less quantifiable benefits. Do we want to spend more on bed-nets today, if that is at the expense of investing in a country’s capacity to manage its own health system in future?
Third, it may impose the wrong priorities. We know that aid works best when it truly responds to the priorities of developing countries themselves. But if donors are trying to target a handful of results indicators – such as the number of children in school – then this may reduce their flexibility to get behind government programmes.
Fourth, it may ignore equity. If we reduce aid programmes to numerical totals, we may neglect the people in deepest need, who are often the hardest and most expensive people to reach.
Fifth, it may create perverse incentives. We have seen in our own public services how poorly designed targets can distort choices in unhelpful ways. A target to increase school enrolment may lead governments to pay too little attention to the quality of education, for example.
Sixth, it may inhibit partnership. Aid is more effective when donors work together with developing country partners and with the private sector. The necessity of being able to quantify and demonstrate the results of each aid programme makes it harder for everyone to get together in a common cause.
Seventh, the results information is all bogus anyway. Claims about results must rely on assumptions about the counterfactual. Even the most rigorous impact evaluations do not reliably tell you what will happen when a slightly different project is implemented in a different context. Effects may be described within the boundaries of the project itself, but it is much harder to understand the broader effects of aid on the political economy of the country or its macro-economy. In the absence of a common framework for attribution, every one of the organisations through which the same money passes claims all the results of the programmes which it finances, leading to massive double-counting and exaggeration.
Underlying all seven of these worries is a sense that the push to measure results is insulting to the development profession. Years of training and experience of working in difficult and nuanced situations cannot be replaced by an information system which reduces each aid project to a few numbers. Nor do aid professionals need targets to incentivize them to make the most of the budgets under their control: that is the entire raison d’être of their professional work.
All these concerns are valid and important; and yet I remain a strong supporter of the results agenda in aid, and (for the most part) I admire the way it is being implemented by the British government and others. I believe that Andrew Mitchell is implementing the results agenda in DFID in a sensible way which pays attention to these risks.
We must set against these concerns the reasons why the results agenda is important.
First, we cannot sustain rising aid budgets in the face of growing public scepticism unless we can demonstrate to the people who pay for aid that it is making a difference. In March the ONE campaign published an important summary of results which are expected from UK aid between now and 2015. The numbers were put together for ONE by the reliable aid data geeks (and my former colleagues) at Development Initiatives. By bringing together information from across the aid programme and simplifying it into a small number of summary statistics, they make a more compelling case for aid than anything we have seen in recent years.
Second, we have a duty to the world’s poor to use money as effectively as we can. Sadly aid budgets are still too small to live up to the commitment made by world leaders to ‘spare no effort’ to reach all the Millennium Development Goals, and that means we have to make hard choices. Because the need is so great, almost everything we do with aid will make a positive difference, and it is easy for this to breed complacency. People making choices about aid should not merely try to do good, but try to do the very best they can so that they help as many people as possible as much as possible. If the differences between aid projects in impact per pound spent were small, we could be somewhat relaxed about spreading aid across many different activities, all of which would bring some benefit. But as the moral philosopher Toby Ord points out, some interventions are as much as a thousand or ten-thousand times more cost-effective than others, and that means we ought not to succumb to the temptation to do a little of everything.
Third, measuring results is the key to unblocking the dysfunctional political economy of aid. Ineffective aid is more than a nuisance or a waste: it threatens to undermine the whole project. We can see the natural pressure on politicians to tie aid to domestic firms; to retain discretion to move aid about to respond to the most recent headline; to do a little of everything everywhere, to appease commercial interests and project the national image as widely as possible; and to spend aid on photogenic projects rather than supporting countries through the slow process of institutional and political change. By contrast the costs of tied, unpredictable, proliferated, projectised aid are invisible, because we do not adequately measure results. With tangible pressures to be dysfunctional, and in the absence of plausible evidence of the costs, it is no surprise that donors have made so little progress in implementing the commitments they made in Rome, Paris and Accra to make their aid more effective.
Fourth and finally, measuring results is the most plausible response to complexity. There is a growing understanding that development is an emergent characteristic of a complex system. This means that it cannot be reduced to a series of smaller, more tractable problems to be solved independently. We have to support developing countries to experiment, to test new ideas and approaches, track the overall effects, and then be ready to help them to adapt as they find out whether they are heading in the right direction. (This point is well made in a recent Development Drums podcast featuring Tim Harford talking about his book, Adapt: Why Success Always Begins With Failure). On this view, measuring results must be an alternative, not an addition, to the convoluted plans, milestones and monitoring that can inhibit the flexibility of many aid projects.
How can we resolve the tension between four good reasons for getting better at measuring results, and seven valid concerns expressed by many in the development profession?
It is helpful that there is agreement about ends if not means. Nobody doubts the value of being able to demonstrate to taxpayers that their money has made a difference; of improving how aid is spent; of overcoming vested interests in ineffective aid; or of creating a stronger feedback loop to support evolutionary complex change. The concerns all relate to how that will happen.
Furthermore, we should recognise that the seven concerns about the results agenda are about risks which have, so far, largely not materialized. For example, while it is possible that focusing on results could lead some decision-makers to under-invest in strategic, long-term interventions, there is no suggestion yet that this is actually happening. Before the recent DFID bilateral aid review, several people working in DFID privately expressed fears that the money would flow mainly to superficial but easily-measurable projects with little transformational or systemic benefit; all told me afterwards that those fears had proved unfounded. DFID has made intelligent, nuanced choices about what to support, and through which aid instruments, which suggest that they have not lost sight of the key objective of long term, sustainable, systemic change.
Nonetheless, the point of identifying and articulating risks is to manage them. There are important steps which donors can take to protect themselves from these legitimate concerns about how the results agenda might be implemented. I propose here a dozen steps which donors could take to secure the goals of the results agenda while reducing the risks that many development professionals have identified. They are divided into three parts: reduce bureaucracy, remain strategic, and increase rigour while remaining proportionate.
Reduce bureaucracy
- Use reliable results measures to replace, not supplement, existing procedures for tracking how aid money has been used. In practice that is likely to require a bottom up review of what additional reporting is needed, if any, once good results measures are in place, and getting rid of the rest.
- Put in place a simple, transparent framework to be used by donors, multilateral institutions, NGOs and other implementing agencies for attributing results to different contributors to a common activity, to avoid double counting and to eliminate the incentive for each donor to ‘go it alone’.
- Agree a global set of standardised output and outcome indicators as part of the International Aid Transparency Initiative reporting standard, to reduce the burden of reporting on developing country governments and implementing agencies, and to enhance cost-effectiveness comparisons. Donors should then adopt a self-denying ordinance: they will track and report only indicators chosen by the developing country itself, or indicators drawn from the globally-agreed standardised set.
Remain strategic
- Trust development professionals by giving them more freedom to design and implement programmes to achieve the agreed results, including the freedom to adjust them in real time without needing to seek approval.
- Put in place a transparent, simple, common framework for taking account of expected future results (e.g. from investments in capacity), so that strategic, long-term and risky investments are properly valued.
- Where there are concerns about equity, address them transparently by specifying a premium for marginalised or under-served groups. For example, if you think it is more important to educate girls than boys, say so, and give girls explicitly a higher weight than boys in the results measures.
- Make choices about portfolios, not each aid project individually. A portfolio enables donors to invest in riskier, high-return projects (because the risks are diversified across the portfolio) which they might not support if they consider each project separately. Focus on portfolio performance in reporting (while also providing detailed information about projects individually for those who are interested).
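Two of the steps above, the equity premium and the attribution framework, come down to simple arithmetic, and can be sketched in a few lines. This is a purely illustrative toy: all counts, weights, donor names and funding shares are invented for the example, not drawn from any real framework.

```python
# Illustrative sketch of an equity-weighted results measure with
# pro-rata attribution across co-funders. All figures are invented.

def weighted_results(counts, weights):
    """Equity-weighted total: each group's count times its stated premium."""
    return sum(counts[group] * weights[group] for group in counts)

def attribute(total, funding_shares):
    """Split one result total across co-funders in proportion to their
    funding, so the claimed shares sum to the total (no double counting)."""
    pool = sum(funding_shares.values())
    return {donor: total * share / pool
            for donor, share in funding_shares.items()}

# A programme enrols 40,000 girls and 60,000 boys; the donor has stated
# explicitly that it values a girl's enrolment at 1.5x a boy's.
counts = {"girls": 40_000, "boys": 60_000}
weights = {"girls": 1.5, "boys": 1.0}
total = weighted_results(counts, weights)  # 40,000*1.5 + 60,000*1.0 = 120,000

# Two donors co-fund the programme 70:30; each may claim only its share,
# rather than both claiming the full 120,000.
shares = attribute(total, {"donor_a": 70, "donor_b": 30})
print(total)   # 120000.0
print(shares)  # {'donor_a': 84000.0, 'donor_b': 36000.0}
```

The point of the sketch is only that both rules are mechanical once the premium and the funding shares are stated openly: the contested judgements (how much extra a girl’s enrolment counts, who funded what) are made explicit rather than hidden in each agency’s own reporting.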
Increase rigour while remaining proportionate
- Do fewer, better evaluations. There are still far too many mediocre process evaluations of individual aid projects; these should be substantially scaled back, with part of the savings going into a smaller number of larger-scale rigorous impact evaluations. The net effect of this will be to save money and bureaucracy, while generating more useful knowledge.
- Reduce the evaluation capacity in each aid agency, putting part of the savings into shared global capacity to do more rigorous and independent impact evaluations. Evidence about the impact of social interventions is a global public good, so donors should work together to fund and produce it collectively.
- Put in place a global register of impact evaluations, in which all impact evaluations must be registered when they begin, drawing on the precedent of clinical trials. Such a public register would, at almost no cost, reduce publication bias, prevent unnecessary duplication and spread learning.
- Recognise that not every intervention should be evaluated. It should often be sufficient for an intervention to set out transparently the existing, rigorous evidence on which it is based.
- Put in place an Institute for Development Effectiveness, modelled on the National Institute for Health and Clinical Excellence (NICE), to examine impact evaluation evidence and provide independent and transparent guidance on cost-effective interventions. Set a ceiling (say, £10m) above which a programme cannot be funded unless it is supported by an existing independent, published, relevant, rigorous impact evaluation which has been quality assured by the Institute for Development Effectiveness. In the absence of such evidence, a project above the ceiling should go ahead only on a trial basis and only if it includes a rigorous impact evaluation to fill the identified knowledge gap.
Large aid agencies are beginning an uncomfortable transition. In the past they have seen themselves as experts to whom the public has delegated the important job of managing the support we give to the developing world. Their job was to act on behalf of citizens who were disempowered by lack of information. In the 21st century aid agencies will play a quite different role – in fact, almost the opposite of how they have seen themselves in the past. They must become a platform through which citizens can become involved directly in how their money is used. Some aid agencies will not survive this change: those that do will be the ones which seize the opportunity to provide transparent, trustworthy, meaningful information which empowers citizens to make well-informed choices. Putting in place a comprehensive, honest results framework is the first step along that road.