The development world talks a lot about ‘the results agenda’. Of course everyone is in favour of results, but this superficial consensus may disguise important differences of opinion.
I was reminded of this yesterday in Berlin, when I took part in a discussion of ‘Results and Accountability’ at a meeting organised by the centre-right think tank, the Konrad Adenauer Stiftung, and the development finance agency, KfW. My presentation was on ‘mutual accountability’, an idea of which I am increasingly sceptical. (No doubt I will write about that here in due course.)
The panel before mine was about ‘results’, and I was struck that different people were using the term ‘results agenda’ to mean quite different things. Here’s what I think they are, and why the differences may be important.
First – using results to justify aid to taxpayers. This is the results agenda we hear most often from politicians – both the ones who lead development agencies, and members of parliament. They make the excellent point that taxpayers want to know that their money is making a difference to people in developing countries – and that good evidence about results is essential to persuade an increasingly sceptical public. My impression is that many development professionals grudgingly accept that reporting results for this reason is a necessary evil, though one they would prefer not to have to do.
Second – using results to improve aid. There are some development professionals who are concerned that while aid may have positive effects overall, we do not get the biggest possible benefits because we do not have enough evidence about what works and what does not work, and we are not sufficiently energetic about allocating aid to the programmes and projects which have the biggest impact.
Third, using results to manage aid agencies. Some leaders of aid agencies want to use results to contain the tendency of some organisations to spread themselves too thin, to sprawl across countries and sectors. A stronger focus on results enables them to re-focus the activities of the organisations they lead. This is something I hear a lot from the senior managers in aid agencies, though from few other people.
Fourth, using results to manage complexity. Many of the problems we are trying to solve involve supporting the emergence of successful complex systems – social and political institutions, economic change and the formation of various kinds of social capital. These complex processes cannot easily be broken down into a series of steps which will predictably lead to the outcomes we want to see. Instead these solutions evolve: taking small steps, finding out what moves in the right direction, and building on progress. The aid industry’s habit of reducing everything into a series of processes and activities which can be planned, tracked and reported not only fails to support this evolution, it can stifle it by preventing both the innovation and the adaptation that evolution requires. Focusing mainly on results can enable the aid business to resist the tendency to plan and prescribe, and so create space for the emergence of sustainable local institutions and systems.
If you have read this far, you may be wondering whether these differences matter. Surely we should be glad we all want the same thing, even if our reasons are different?
The problem is that these four motivations for focusing on results take us in broadly the same direction, but not to the same destination.
For example, if an aid agency is concerned mainly about the first results agenda (demonstrating results to sceptical citizens) then it is not going to be tremendously excited about being honest about its failures. Yet recognizing failure is an essential component of the second results agenda (improving the way aid is used). Which motivation is uppermost in our minds will affect how much effort we put into identifying and learning from investments that do not succeed.
The four motivations have important implications for time horizons too. There is a widespread fear among aid professionals that focusing on results for the purpose of having a good story to tell to taxpayers will tend to drive aid agencies towards results which can be achieved and measured in the short run – such as giving people bednets – at the expense of longer term improvements in institutions and incentives. Yet these longer term investments may have a bigger and more sustainable impact overall, even though there may be little to show for them immediately.
While I think this is a reasonable fear, I have not been persuaded that any of the aid agencies who are leading the charge on the results agenda are so far guilty of this kind of short-termism. Aid agencies can and do emphasize the likely future benefits of investments today, so there is no overriding reason why they should focus only on results which can be measured immediately. We should remain vigilant about this, but I do not agree with those who think that this risk invalidates the entire results agenda.
The different motivations also have differing implications for whether the results agenda is to be layered on top of existing processes of accountability, or should largely replace them. If you want a focus on results because you think this creates space for local ownership, to enable donors to support the emergence of local solutions and institutions, then we should be thinking about ‘post-bureaucratic aid’. Our existing systems have tended to lead to excessive outside prescription and micromanagement; and in principle they should not be needed if we can observe directly the results about which we really care. If on the other hand our interest is merely to increase our accountability to taxpayers, then all we have to do is pile better measurement of results on top of everything else we already track.
What can we do?
The four motives for focusing on results do point in the same direction: we have to do a better job of providing evidence for the results chain.
Aid agencies should be able to account for the inputs they have provided, and provide a reasonable estimate of their contribution to the outputs that are produced. They should also be able to point to rigorous evidence from impact evaluations which demonstrates that these outputs can be expected to produce the intended outcomes and impact, and they should be able to provide a quantified estimate of the impact of the outputs they have funded. These outcomes and impact can – and in my view should – be long-term, institutional improvements and not just short-term gains.
My view is not that there should be an impact evaluation of each and every project which an aid agency supports. (Nor do I believe that a doctor should conduct a randomised controlled trial every time she prescribes a medicine. What I want to know is that every medicine that is generally prescribed has previously been proven to be effective and safe by a rigorous study.) All activities on which significant amounts of aid are being spent should either be justified by existing rigorous, independent evidence from previous programmes, so enabling the agency to provide an ex ante quantified estimate of the impact of providing the expected outputs; or the programme should be run at first on a smaller scale and accompanied by a rigorous impact evaluation to gather such evidence for the future.
A combination of systematic, independent, rigorous impact evaluation and end-to-end transparency would provide information about results which can address all four motivations for the results agenda. It enables taxpayers to see how their money is being spent and what it is achieving, and enables them to press for that money to be redirected so that it has greater and greater impact. It will put pressure on development organisations to focus on the interventions in which they will have most impact, and to streamline their processes. The combination of end-to-end transparency and trusted measurement of impact can replace the burdensome controls and micromanagement which aid agencies have felt obliged to put in place.
Although there is a model of managing for results which ticks the boxes of all four motivations, it does not follow that aid agencies will necessarily move in that direction. If we leave the focus on results to people who are mainly motivated by the first agenda – explaining to the taxpayer what difference their money has made – then there is every possibility that they will design systems which do not meet these other concerns. The danger is that aid agencies may not be sufficiently energetic in using information about results to improve the impact of aid; they may focus on short-term gains and ignore long-term impact; and they may add reporting results to the many existing procedures instead of using rigorous results measurements as the basis for radical simplification. I don’t think this is happening in practice, but it is a risk we must guard against, and one way to do that is to recognise and value all four of the motivations which lie behind ‘the results agenda’.
&lt;Pedants’ corner&gt; To an old-fashioned grammarian like me, there is no plural of the word ‘agenda’. It is already plural, and the singular is ‘agendum’. Because, you know, we all still talk Latin in Britain. So I don’t like to say that there are four “results agendas” even though that was the title of one of my PowerPoint slides yesterday. Hence the odd-sounding title of this blog post. Oh, and while we are at it, you cannot ‘deliver’ results: you achieve them. &lt;/Pedants’ corner&gt;