The development world talks a lot about ‘the results agenda’.  Of course everyone is in favour of results, but this superficial consensus may disguise important differences of opinion.

I was reminded of this yesterday in Berlin, when I took part in a discussion of ‘Results and Accountability’ at a meeting organised by the centre-right think tank the Konrad Adenauer Stiftung and the development finance agency KfW.  My presentation was on ‘mutual accountability’, an idea of which I am increasingly sceptical. (No doubt I will write about that here in due course.)

The panel before mine was about ‘results’; and I was struck that different people were using the term ‘results agenda’ to mean quite different things.  Here’s what I think they are, and why the differences may be important.

First – using results to justify aid to taxpayers.  This is the results agenda we hear most often from politicians – both the ones who lead development agencies, and members of parliament.  They make the excellent point that taxpayers want to know that their money is making a difference to people in developing countries – and that good evidence about results is essential to persuade an increasingly sceptical public.  My impression is that many development professionals grudgingly accept that reporting results for this reason is a necessary evil, though many people would prefer not to have to do it.

Second – using results to improve aid.  There are some development professionals who are concerned that while aid may have positive effects overall, we do not get the biggest possible benefits because we do not have enough evidence about what works and what does not work, and we are not sufficiently energetic about allocating aid to the programmes and projects which have the biggest impact.

Third, using results to manage aid agencies.  Some leaders of aid agencies want to use results to contain the tendency of some organisations to spread themselves too thin, to sprawl across countries and sectors. A stronger focus on results enables them to re-focus the activities of the organisations they lead. This is something I hear a lot from the senior managers in aid agencies, though from few other people.

Fourth, using results to manage complexity.  Many of the problems we are trying to solve involve supporting the emergence of successful complex systems – social and political institutions, economic change and the formation of various kinds of social capital.  These complex processes cannot easily be broken down into a series of steps which will predictably lead to the outcomes we want to see.   Instead these solutions evolve: taking small steps, finding out what moves in the right direction, and building on progress.  The aid industry’s habit of reducing everything into a series of processes and activities which can be planned, tracked and reported not only fails to support this evolution, it can stifle it by preventing both the innovation and the adaptation that evolution requires.  Focusing mainly on results can enable the aid business to resist the tendency to plan and prescribe, and so create space for the emergence of sustainable local institutions and systems.

So what?

If you have read this far, you may be wondering whether these differences matter.  Surely we should be glad we all want the same thing, even if our reasons are different?

The problem is that these four motivations for focusing on results take us in broadly the same direction, but not to the same destination.

For example, if an aid agency is concerned mainly about the first results agenda (demonstrating results to sceptical citizens) then it is not going to be tremendously excited about being honest about its failures.  Yet recognizing failure is an essential component of the second results agenda (improving the way aid is used). Which motivation is uppermost in our minds will affect how much effort we put into identifying and learning from investments that do not succeed.

The four motivations have important implications for time horizons too.  There is a widespread fear among aid professionals that focusing on results for the purpose of having a good story to tell to taxpayers will tend to drive aid agencies towards results which can be achieved and measured in the short run – such as giving people bednets – at the expense of longer term improvements in institutions and incentives. Yet these longer term investments may have a bigger and more sustainable impact overall, even though there may be little to show for them immediately.

While I think this is a reasonable fear, I have not been persuaded that any of the aid agencies who are leading the charge on the results agenda are so far guilty of this kind of short-termism.  Aid agencies can and do emphasize the likely future benefits of investments today, so there is no overriding reason why they should focus only on results which can be measured immediately.  We should remain vigilant about this, but I do not agree with those who think that this risk invalidates the entire results agenda.

The different motivations also have differing implications for whether the results agenda is to be layered on top of existing processes of accountability, or should largely replace them.  If you want a focus on results because you think this creates space for local ownership, to enable donors to support the emergence of local solutions and institutions, then we should be thinking about ‘post-bureaucratic aid’. Our existing systems have tended to lead to excessive outside prescription and micromanagement; and in principle they should not be needed if we can observe directly the results about which we really care. If on the other hand our interest is merely to increase our accountability to taxpayers, then all we have to do is pile better measurement of results on top of everything else we already track.

What can we do?

The four motives for focusing on results do point in the same direction: we have to do a better job of providing evidence for the results chain.

Aid agencies should be able to account for the inputs they have provided, and provide a reasonable estimate of their share of the outputs that are produced.  They should also be able to point to rigorous evidence from impact evaluations which demonstrates that these outputs can be expected to produce the intended outcomes and impact, and they should be able to provide a quantified estimate of the impact of the outputs they have funded.  These outcomes and impact can – and in my view should – be long-term, institutional improvements and not just short-term gains.

My view is not that there should be an impact evaluation of each and every project which an aid agency supports. (I also don’t believe that a doctor should conduct a randomised controlled trial every time she prescribes a medicine.  What I want to know is that every medicine that is generally prescribed has been previously proven to be effective and safe by a rigorous study.)  All activities on which significant amounts of aid are being spent should either be justified by existing rigorous, independent evidence from previous programmes, enabling the agency to provide an ex ante quantified estimate of the impact of providing the expected outputs; or the programme should be run at first on a smaller scale and accompanied by a rigorous impact evaluation to gather such evidence for the future.

A combination of systematic, independent, rigorous impact evaluation and end-to-end transparency would provide information about results which can address all four motivations for the results agenda.   It enables taxpayers to see how their money is being spent and what it is achieving, and enables them to press for that money to be redirected so that it has greater and greater impact.  It will put pressure on development organisations to focus on the interventions in which they will have most impact, and to streamline their processes.  The combination of end-to-end transparency and trusted measurement of impact can replace the burdensome controls and micromanagement which aid agencies have felt obliged to put in place.

Although there is a model of managing for results which ticks the boxes of all four motivations, it does not follow that aid agencies will necessarily move in that direction.   If we leave the focus on results to people who are mainly motivated by the first agenda – explaining to the taxpayer what difference their money has made – then there is every possibility that they will design systems which do not meet these other concerns. The danger is that aid agencies may not be sufficiently energetic in using information about results to improve the impact of aid; they may focus on short-term gains and ignore long-term impact; and they may add reporting results to the many existing procedures instead of using rigorous results measurements as the basis for radical simplification.  I don’t think this is happening in practice, but it is a risk we must guard against, and one way to do that is to recognise and value all four of the motivations which lie behind ‘the results agenda’.

<Pedants’ corner>  To an old fashioned grammarian like me, there is no plural of the word ‘agenda’. It is already plural, and the singular is ‘agendum’.  Because, you know, we all still talk Latin in Britain.  So I don’t like to say that there are four “results agendas” even though that was the title of one of my powerpoint slides yesterday. Hence the odd sounding title of this blog post.  Oh, and while we are at it, you cannot ‘deliver’ results: you achieve them. </Pedants’ corner>


18 Responses to What are the results agenda?

  • Thanks for the helpful blog Owen. Always useful to remember that there are a number of results agendas.

    Oxfam’s paper on “the right results” is well worth a read on this too.

    http://policy-practice.oxfam.org.uk/publications/the-right-results-making-sure-the-results-agenda-remains-committed-to-poverty-r-143490

    It makes the point that different stakeholders might be interested in different sorts of results and/or have different views about the best ways to get the results that they want.

    With, erm, the result, being that clarity about results chains – while very helpful in providing a basis for discussion, and something to be aimed for, not least as it can help to make assumptions/evidence base explicit – may/will not resolve underlying differences in preferences, values, worldviews and politics.

    I’ve talked about this in terms of “impossible geometries?”, borrowing the phrasing from Paolo de Renzio.

    http://www.odi.org.uk/events/presentations/917.pdf

    Perhaps the scepticism about mutual accountabilities is because in the real world it’s often all about multiple – sometimes conflicting – accountabilities?

    Alan – I agree with most of that. But I think the answer is to embrace the results agenda(s) – all of them – not oppose them. Perhaps you agree with that too? But that isn’t the impression I get from the Oxfam paper.

  • Very important issue. As Natsios so eloquently put it, we have to make sure that we separate accountability from countability. Like you I am sceptical of mutual accountability, and an important aspect of this is not to use “accountability” in a fuzzy way.  I can hold someone to account if I have sticks and/or carrots I can actually use to reward or sanction them for their behaviour; and others can similarly hold me to account if they have the same. So accountability at its best equals transparently agreed measures (whether a priori or not), monitoring, plus sticks and carrots, leading to behaviour change.
    See also:  http://philvernondotnet.wordpress.com/2011/11/29/is-accountability-of-aid-the-same-as-countability-of-aid/

  • Dear Owen, 
    Owen is back. It is nice to see a development problem explained so concisely and clearly like you do. Also focusing on one of the essential questions. 
    When managing the results agenda from the second viewpoint, I think professionals should also take the last viewpoint on board: the complexity of aid is a reality, and so improvement of aid can only happen taking this complexity on board. 
    What is important to me, and linked to this discussion, is the need to shorten feedback loops. 
    What we see now in the aid management cycle is a long-term cycle that is driven in large part by conventional wisdom and political agenda (plural of agendum). The DFID approach is typical: once every new government, a total redrafting takes place. I wonder what happens with all the small-scale impact lessons learned? Can a small evidence-based programme with enormous potential withstand the storm? I would not think so. 
    What we need is systems, like the beneficiary feedback systems, that can provide an impact view before the project ends. Otherwise we will be taking the decision on the next project without knowing what happened before. I know, we need also the long term view…

  • Owen, a very interesting piece (and some of the comments)
    It’s fascinating to hear some of the language of systems thinking permeating through this discussion. I’m not familiar with the aid environment and methods, but I’m wondering if this manifests at some stage in, for instance, visual representations of the complexity that is talked about – in, say, causal diagrams characterising system structure and causal relationships, or where necessary definable processes. These relationships can be between “hard”, clearly definable parameters, some of which one might term as “results”, or softer, qualitative, maybe even gut-feel parameters. It enhances the ability to identify the positive (self-reinforcing) feedback loops and negative (self-limiting) feedback loops through which the outcomes play out. By extracting key features, one might show how, for instance, a results-driven system could lead to unintended consequences – for a wider audience. I’m not describing anything particularly new here, but I’m interested to see if some of the systems concepts and methods are being applied.
    Owen, is this old ground for you ?

    Owen replies: John: thanks. No, this is not old ground for me. I believe that development has much to learn from this thinking about complex and emergent systems.

  • Dear Owen

    Thanks for this – a hot topic in development.

    It would be good to unpack the implications of the differences in typology.  The example you gave, about how the need to account to taxpayers would militate against reporting failure, did not really persuade me.  That sounds like propaganda!  If an aid donor can have an effective approach to finding out what isn’t working early in the life of a programme, and either fix the problem or do something different, that is going to be good for reassuring taxpayers that the donor is vigilant about not wasting resources, as well as good for achieving results.  Tim Harford’s ideas about starting small and being able to distinguish success from failure are relevant here.

    There is a lot of talk about accountability to taxpayers these days.  The logic here is that, especially in times of recession, unless you can show results the support for aid will disappear.  I think however that the support could disappear (or fail to appear) whether or not results are achieved.  There is an important constituency who simply don’t identify with the objective function involved in transferring resources abroad.  They would rather see the resources allocated to reducing domestic taxes or borrowing, or increasing spending at home.  Results are a moot question.

    I was a bit surprised not to see explaining results to recipient governments and populations as a driver for the results agenda. The aid “business” needs to show recipients that aid is beneficial in order to make the case for continuing to impose transaction costs and reduced sovereignty in return.  Should this be a 5th category?

    The metaphor about everything being evidence based without evaluating everything, as per the medical profession, is important, as there does seem to be a knee-jerk response to the results agenda in subjecting almost everything to “rigorous evaluation” (though in most cases “rigorous” does not mean experimental trials.  There is perhaps value to be had in doing rather less evaluation (even) more rigorously).  But much aid is by its nature context specific, as compared to say drug impacts on human health, so relying on evaluation of other programmes as the evidence base may not be as reliable a basis for prescription in the aid business as compared to the, well, prescription business.

    The main apparent risk in the results agenda(s) is that it may focus effort and resources on what is measurable rather than on what is (arguably) important.  If institutional change/capacity building is hard to measure, and therefore hard to show results for, there could be a risk that those focused on results will stop focusing on it in favour of things which can be measured (like, say, treatment of intestinal worms).  Is this a serious worry, or is it special pleading by advocates for programmes which are unable to generate measurable results?

  • A good post.
    re: Item 1 – it’s the taxpayer’s money, so there must be accountability regarding how it is spent – to argue otherwise is to challenge the foundation of any democracy. 
    re: Item 2 – A logical consequence if you agree with Item 1.
    re: Item 3 – Very interesting observation. Having served on the Audit and Finance Committee of a small NGO board, one challenge was the continuing survival of the NGO in the face of uncertain long-term government funding and the subsequent temptation to ‘follow the money’. That is, moving away from core specialities and mission into other areas where the funding lay. I suspect this is a greater driver of spreading out than any desire to build empires. This need to ‘follow the money’ is also what drives the often effective but misleading campaigns to attract private sector donors, which in the long term has damaged the credibility of the industry as a whole.
    re: Item 4 – the aid industry is an unregulated industry comprising a continuum ranging from the very good to the very bad and outright predatory; the client base – i.e. the LDCs – is the same: a continuum of governance structures ranging from those that enhance the provision and protection of public-goods investments to those that enhance the collection and distribution of rents. Within this often mismatched operating environment, which will muddy the evaluation waters, there is an E & E issue.
    E & E means Efficiency and Effectiveness: efficiency being the manner in which available resources are allocated, and effectiveness being whether the results meet desired targets. In the private sector, the numbers on a balance sheet encompass both factors: we can see how money is allocated/invested, and we can see via the bottom line whether we are effective by turning a profit and/or being more competitive than the competition. NGO and agency balance sheets usually say a lot about efficiency and very little about effectiveness. It is the problem with these ‘partial’ numbers that is the source of the drive towards greater M & E and, at times, the questioning of aid itself. Until a way is found to recombine the two E’s, the issues and debate will not go away.
    There is one possibility, which so far seems to be missing in the debate – sovereign bond markets or the lack thereof in so many LDCs. Many NGOs at least, I don’t know about agencies, have designated funds used for investment purposes and back-up for a line of credit. If a system were devised that NGOs and agencies were obligated to carry a certain portion of their designated funds in the bonds of the targeted LDC where they have programmes, then we have the beginnings of a self-correcting mechanism.
    LDCs would not attract significant long-term aid unless they invest in themselves and reduce risk; NGOs and agencies will not provide such aid unless reasonably assured that their programmes are effective and they will recoup their investment. Furthermore, since LDCs will be receiving money through the bond market and not a designated programme, they are free to use that money any way they wish constrained by the need to pay it back with profit to the bond holder at a long-term later date. Hence the need to invest in themselves and reduce risk through encompassing public goods rather than continuing to behave as rentier and distributional systems.

  • I believe one important reason for measuring results is forgotten here:

    Fifth – using results to demonstrate improvement to your intended beneficiaries.

    It is not always so that intended beneficiaries will automatically be able to tell whether your programme has had the desired impact or not, and I do believe that aid agencies should also be accountable to their intended beneficiaries.

    In a recent discussion on this topic, a colleague of mine commented that although their impact indicators were very cumbersome for mainstreamed reporting to donors, they could not be changed. The indicators were agreed upon by the intended beneficiaries, and measured the change the beneficiaries would like to see in order to believe the programme to be in their interest. If this change could not be demonstrated, the beneficiaries would not see the usefulness of the intervention and, hence, would refuse to participate. Furthermore, my colleague explained, beneficiaries’ opinions should always trump donors’ opinions – at least if you believe that poor people too should have a say in their own lives.

  • great framework to approach a topic where people often talk at cross-purposes.

    for me, i’ve always seen the first results ‘agenda’ less as using results to justify aid to taxpayers (as international development organisations have little direct contact with the public, and much more with line ministries, donors) and instead more as donors using results to justify their own spending to the development organisations, using the excuse of taxpayer value for money. yes, people want to know – at a very broad level – that their money is making a difference, but a) the results that actually get to the public are hugely diluted (ever seen a press release about capacity development?), b) the public are increasingly skeptical about the whole concept of aid – begging the question, as Owen has asked before, do they care? and, c) if they do care, it is only about the tangible, quantifiable results.

    the ones who actually care are the donor decision-makers, who need the evidence of results to justify funding allocations….which leads me to the third results ‘agenda’: in my experience it’s not the leaders of aid agencies who want to see results used for prioritisation, it’s the executive boards/governing bodies/donors for these aid agencies who are constantly pushing for better focus.

    so while the results ‘agenda’ can, i think, have great benefit at the micro programme and project level (cf results for complexity above), at the macro level it is increasingly being used as a carrot and stick for funders, dressed up in the language of accountability
