This article first appeared in Norrag News, in a special edition with lots of interesting articles looking at “Value for Money in International Education: A New World of Results, Impacts and Outcomes”.  

“The way to get things done is not to mind who gets the credit.”
Benjamin Jowett, English Clergyman (1817-1893)

It seems perverse at first to oppose measuring the results of overseas aid.  It would make sense if you work for a merchant bank which is trying to rip off its clients, as was recently alleged about Goldman Sachs by a former employee. If your business relies on your customers being less informed than you about the value of what you are selling them, you can see why even a little sunlight may be dangerous.

But foreign aid is not like that. Everyone I meet who works on development is motivated by the desire to make a difference for people in the developing world. So why is there so much anxiety among serious and sensible people about the idea that we should measure and make public the results of aid?  There are seven common criticisms of the so-called ‘results agenda’.

First, it may add to bureaucratic overload.  Collecting information about results is yet one more central reporting requirement, in a system that is already overburdened with forms and procedures.

Second, it may make aid less strategic. The need to produce quantifiable results may tend to make donors prefer a short-term investment in something which can be measured instead of an investment which may have more significant, longer-term but less quantifiable benefits. Do we want to spend more on bed-nets today, if that is at the expense of investing in a country’s capacity to manage its own health system in future?

Third, it may impose the wrong priorities. We know that aid works best when it truly responds to the priorities of developing countries themselves. But if donors are trying to target a handful of results indicators – such as the number of children in school – then this may reduce their flexibility to get behind government programmes.

Fourth, it may ignore equity.  If we reduce aid programmes to numerical totals, we may neglect the people in deepest need, who are often the hardest and most expensive people to reach.

Fifth, it may create perverse incentives. We have seen in our own public services how poorly designed targets can distort choices in unhelpful ways. A target to increase school enrolment may lead governments to pay too little attention to the quality of education, for example.

Sixth, it may inhibit partnership.  Aid is more effective when donors work together with developing country partners and with the private sector.  The necessity of being able to quantify and demonstrate the results of each aid programme makes it harder for everyone to get together in a common cause.

Seventh, the results information is all bogus anyway. Claims about results must rely on assumptions about the counterfactual.  Even the most rigorous impact evaluations do not reliably tell you what will happen when a slightly different project is implemented in a different context. Effects may be described within the boundaries of the project itself, but it is much harder to understand the broader effects of aid on the political economy of the country or its macro-economy.  In the absence of a common framework for attribution, every one of the organisations through which the same money passes claims all the results of the programmes which it finances, leading to massive double-counting and exaggeration.

Underlying all these seven worries is a sense that the push to measure results is insulting to the development profession.  Years of training and experience of working in difficult and nuanced situations cannot be replaced by an information system which reduces each aid project to a few numbers.  Nor do aid professionals need targets to incentivize them to make the most of the budgets under their control: that’s the entire raison d’être of their professional work.

All these concerns are valid and important; and yet I remain a strong supporter of the results agenda in aid, and (for the most part) I admire the way it is being implemented by the British government and others. I believe that Andrew Mitchell is implementing the results agenda in DFID in a sensible way which pays attention to these risks.

We must set against these concerns the reasons why the results agenda is important.

First, we cannot sustain rising aid budgets in the face of growing public scepticism unless we can demonstrate to the people who pay for aid that it is making a difference.  In March the ONE campaign published an important summary of results which are expected from UK aid between now and 2015.  The numbers were put together for ONE by the reliable aid data geeks (and my former colleagues) at Development Initiatives.  By bringing together information from across the aid programme and simplifying it into a small number of summary statistics, they make a more compelling case for aid than anything we have seen in recent years.

Second, we have a duty to the world’s poor to use money as effectively as we can.  Sadly aid budgets are still too small to live up to the commitment made by world leaders to ‘spare no effort’ to reach all the Millennium Development Goals, and that means we have to make hard choices.  Because the need is so great, almost everything we do with aid will make a positive difference, and it is easy for this to breed complacency.  People making choices about aid should not merely try to do good, but try to do the very best they can so that they help as many people as possible as much as possible.  If the differences between aid projects in the impact for each pound spent were small, we could be somewhat relaxed about spreading aid across many different activities, all of which would bring some benefit.  But as the moral philosopher Toby Ord points out, some interventions are as much as a thousand or ten-thousand times more cost-effective than others, and that means we ought not succumb to the temptation to do a little of everything.

Third, measuring results is the key to unblocking the dysfunctional political economy of aid.  Ineffective aid is more than a nuisance or a waste: it threatens to undermine the whole project.  We can see the natural pressure on politicians to tie aid to domestic firms; to retain discretion to move aid about to respond to the most recent headline; to do a little of everything everywhere, to appease commercial interests and project the national image as widely as possible; and to spend aid on photogenic projects rather than supporting countries through the slow process of institutional and political change.  By contrast the costs of tied, unpredictable, proliferated, projectised aid are invisible, because we do not adequately measure results.  With tangible pressures to be dysfunctional, and in the absence of plausible evidence of the costs, it is no surprise that donors have made such little progress implementing the commitments they made in Rome, Paris and Accra to make their aid more effective.

Fourth and finally, measuring results is the most plausible response to complexity.  There is a growing understanding that development is an emergent characteristic of a complex system. This means that it cannot be reduced to a series of smaller, more tractable problems to be solved independently. We have to support developing countries to experiment, to test new ideas and approaches, track the overall effects, and then be ready to help them to adapt as they find out whether they are heading in the right direction. (This point is well made in a recent Development Drums podcast featuring Tim Harford talking about his book, Adapt: Why Success Always Begins With Failure).  On this view, measuring results must be an alternative, not an addition, to the convoluted plans, milestones and monitoring that can inhibit the flexibility of many aid projects.

How can we resolve the tension between four good reasons for getting better at measuring results, and seven valid concerns expressed by many in the development profession?

It is helpful that there is agreement about ends if not means. Nobody doubts the value of being able to demonstrate to taxpayers that their money has made a difference; of improving how aid is spent; of overcoming vested interests in ineffective aid; or of creating a stronger feedback loop to support evolutionary complex change.  The concerns all relate to how that will happen.

Furthermore, we should recognise that the seven concerns about the results agenda are about risks which have, so far, largely not materialized.  For example, while it is possible that focusing on results could lead some decision-makers to under-invest in strategic, long-term interventions, there is no suggestion yet that this is actually happening. Before the recent DFID bilateral aid review, several people working in DFID privately expressed fears that the money would flow mainly to superficial but easily-measurable projects with little transformational or systemic benefit; all told me afterwards that those fears had proved unfounded.  DFID has made intelligent, nuanced choices about what to support, and through which aid instruments, which suggest that they have not lost sight of the key objective of long term, sustainable, systemic change.

Nonetheless, the point of identifying and articulating risks is to manage them. There are important steps which donors can take to protect themselves against these legitimate concerns about how the results agenda might be implemented. I propose here a dozen steps which would help donors to secure the goals of the results agenda, while reducing the risks that many development professionals have identified.  They are divided into three parts: reduce bureaucracy, remain strategic, and increase rigour while remaining proportionate.

Reduce bureaucracy

  • Use reliable results measures to replace, not supplement, existing procedures for tracking how aid money has been used.  In practice that is likely to require a bottom up review of what additional reporting is needed, if any, once good results measures are in place, and getting rid of the rest.
  • Put in place a simple, transparent framework to be used by donors, multilateral institutions, NGOs and other implementing agencies for attributing results to different contributors to a common activity, to avoid double counting and to eliminate the incentive for each donor to ‘go it alone’.
  • Agree a global set of standardised output and outcome indicators as part of the International Aid Transparency Initiative reporting standard, to reduce the burden of reporting on developing country governments and implementing agencies, and to enhance cost-effectiveness comparisons. Then donors should adopt a self-denying ordinance: they will track and report results only against indicators chosen by the developing country itself, or drawn from the globally-agreed standardised set.
  • Trust development professionals by giving them more freedom to design and implement programmes to achieve the agreed results, including the freedom to adjust them in real time without needing to seek approval.

Remain strategic

  • Put in place a transparent, simple, common framework for taking account of expected future results (e.g. from investments in capacity), so that strategic, long-term and risky investments are properly valued.
  • Where there are concerns about equity, transparently include this by specifying the premium for marginalised or under-served groups. For example, if you think it is more important to educate girls than boys, say so, and include girls explicitly at a higher weight than boys in the results measures.
  • Make choices about portfolios, not each aid project individually.  A portfolio enables donors to invest in riskier, high-return projects (because the risks are diversified across the portfolio) which they might not support if they consider each project separately.  Focus on portfolio performance in reporting (while also providing detailed information about projects individually for those who are interested).

Increase rigour while remaining proportionate

  • Do fewer, better evaluations. There are still far too many mediocre process evaluations of individual aid projects; these should be substantially scaled back, with part of the savings going into a smaller number of larger-scale rigorous impact evaluations. The net effect of this will be to save money and bureaucracy, while generating more useful knowledge.
  • Reduce the evaluation capacity in each aid agency, putting part of the savings into shared global capacity to do more rigorous and independent impact evaluations.  Evidence about the impact of social interventions is a global public good, so donors should work together to fund and produce it collectively.
  • Put in place a global register of impact evaluations, in which all impact evaluations must be registered when they begin, drawing on the precedent of clinical trials. Such a public register would, at almost no cost, reduce publication bias, prevent unnecessary duplication and spread learning.
  • Recognise that not every intervention should be evaluated. It should often be sufficient for an intervention to set out transparently the existing, rigorous evidence on which it is based.
  • Put in place an Institute for Development Effectiveness, modelled on the National Institute for Health and Clinical Excellence (NICE), to examine impact evaluation evidence and provide independent and transparent guidance on cost-effective interventions.  Set a ceiling (say, £10m) above which a programme cannot be funded unless it is supported by an existing independent, published, relevant, rigorous impact evaluation which has been quality assured by the Institute for Development Effectiveness. In the absence of such evidence, a project above the ceiling should go ahead only on a trial basis and only if it includes a rigorous impact evaluation to fill the identified knowledge gap.

Large aid agencies are beginning an uncomfortable transition. In the past they have seen themselves as experts to whom the public has delegated the important job of managing the support we give to the developing world.  Their job was to act on behalf of citizens who were disempowered by lack of information. In the 21st century aid agencies will play a quite different role – in fact, almost the opposite of how they have seen themselves in the past. They must become a platform through which citizens can become involved directly in how their money is used. Some aid agencies will not survive this change: those that do will be the ones which seize the opportunity to provide transparent, trustworthy, meaningful information which empowers citizens to make well-informed choices.  Putting in place a comprehensive, honest results framework is the first step along that road.


32 Responses to Seven worries about focusing on results, and how to manage them

  • Re “In March the ONE campaign published an important summary of results which are expected from UK aid between now and 2015.  The numbers were put together for ONE by the reliable aid data geeks (and my former colleagues) at Development Initiatives.  By bringing together information from across the aid programme and simplifying it into a small number of summary statistics, they make a more compelling case for aid than anything we have seen in recent years.”

    I have to object! This sort of egotistical PR continues to mislead the UK public with appealing but false simplicities. It is the sort of thing aid agencies have been peddling for decades, despite the fact that they know they are doing something else in practice. In practice DFID works through national governments (and CSOs to a lesser extent) who are (or should be) responsible for delivering the types of services that are described by the ONE campaign. Yet I read through the Small Change/Big Difference doc and found only one reference to expected changes in government behavior. In effect it suggests they have no particular role… how weird is that!
    If aid agencies want to count things to tell people about, then they should count the number of country governments who have been able to…put x children into primary schools, vaccinate x children, hold fairer elections, treat x cases of malaria, provide x with safe water, etc. This is what matters if we have any concern at all about sustainable development.
    It is time aid agencies (both Govt and NGO) started treating the British public as adults, rather than as primary school children being encouraged to take part in a charity fund raising event.

    • @Rick

      Thanks. With respect, I disagree.

      You are absolutely right that DFID works mainly through national governments. They do this for at least two reasons: first, because it is important in the long run to contribute to more effective state institutions; and second, because it is clear that aid is most effective when it supports a country’s own priorities and strategies.

      None of that is any reason for DFID not to explain to the British public how many children are in school as a result of the aid that they give. The public knows that this aid is not individually handed out by bowler-hatted civil servants, but managed by intermediary organisations such as governments, international organisations, private sector firms and NGOs. There are circumstances in which it matters enormously to consider the channels by which aid is delivered in different circumstances or for different purposes. As you say, these decisions are important for sustainable development. But the fact that these overall numbers do not distinguish the different ways in which aid is delivered does not make these numbers in any way misleading or inaccurate.

      There is nothing at all wrong with tallying it all up and telling the public what the net effect of the aid has been. That isn’t egotistical: it provides an honest answer to the question the public repeatedly asks: what difference does all this aid make? There is nothing to be gained from a stubborn refusal to provide an accurate, simple reply to this question.

      Owen

  • “while it is possible that focusing on results could lead some decision-makers to under-invest in strategic, long-term interventions, there is no suggestion yet that this is actually happening. Before the recent DFID bilateral aid review, several people working in DFID expressed privately fears that the money would flow mainly to superficial but easily-measurable projects with little transformational or systemic benefit; all told me afterwards that those fears had proved unfounded”

    But look at the DFID Multilateral Aid Review – which rewards the provision of tangible (and so more quantitative short-term) aid rather than the intangibles of capacity development, standard setting (longer-term, more qualitative with less immediate impact – and not such a nice soundbite). Just count the number of speeches Mr Mitchell has given where he mentions the number of extra kids in school and number of bednets, versus those that talk about sustainable changes such as quality of education, or ability of government to manage its own health systems.

  • Hi Owen
    1. It is egotistical when an agency talks about what all their support is doing, without mentioning the fact that they have a partner who they are working with and who is usually more directly involved. I raised this issue years ago in a blog posting re UK NGOs (Where are the partners?)
    2. There is a unit of analysis issue here. Should aid agencies be measuring numbers of children immunised, or numbers of governments who have immunised at least x% of children? It is the performance of governments that matters, and their immunisation coverage is one aspect of their performance. Immunisation numbers by themselves, aggregated across countries, hide more than they tell.
    3. It is not just a matter of being clear about how aid is delivered, but of being clear who is and should be responsible for the types of changes aid seeks to facilitate. Using the wrong units of analysis obscures responsibility.

  • Owen, I think (in fact I am sure) that measuring results is not enough; one needs to try to understand HOW they were reached, and why the same ‘inputs’ and even the same types of actors produce some results in some contexts and different results in others. And so, while measuring results (along with the other ‘things’ one needs to document in order to understand them) is fine, rewarding results is dangerous, because it implicitly assumes that one knows all about all the possible contexts – or that they don’t matter.

  • Owen, your thoughtful post does not mention a major contributor to solving the dilemmas: greater clarity about the anticipated logic of aid programmes – combined with real-time monitoring of whether the logic is unfolding as expected.
    Many programmes rely on the very simplified summary in a logframe, rather than unpacking and articulating the stream of events that they hope will result from their work. Once that stream is made more explicit (as a results chain / theory of change / programme logic / causal model – call it what you will), managers can begin to see whether the assumed changes actually take place.
    This process (let’s call it proper monitoring) seems to be greatly under-estimated and indeed somewhat ignored – yet the strategic clarity it offers (at little extra cost) is surely worth the effort.

  • Indeed, Catherine, it is surely important to find ways of rewarding honest and transparent reporting – not just reporting of big numbers…

  • Dear Owen, 

    In general I could not agree with you more, except on one aspect: can we not compare the people who resist results measurement with Goldman Sachs? I think they are just as lethal.

    At Goldman Sachs, of course, the people did not go to work in the spirit of “let us rip off some clients”. The people ripping clients off were doing so in a context that made them think it was the right thing to do – for some higher cause, such as keeping the financial system going, for the firm, for their family, and so on.

     In development the same kinds of incentives are at play. People resist change, not out of malice, but because they want to stay proud of the work they did; because they believe in a worldview that does not chime with the latest data on what works and what does not; because they get along really well with the current minister of health.

    What we need is a short feedback loop: measuring results on the fly, getting immediate feedback on how things are working, so we can steer, apply marginal corrections where needed. We need the data for this. I am not pleading for an ad-hoc approach. What I see is that the project cycle is too long to learn anything useful: before the evaluations are in, nobody cares about the subject any more.

    I would like to see respect for seemingly contradictory goals:

    • Most of the money should get results: bang for the buck.
    • Important investments, but not too much, must be risk capital: we want to find out whether something works.
    • Where it is too difficult to measure results but the activities should be done in any case, we must be open about it and make a conscious choice instead of an unconscious one. E.g. in human rights work and democratisation, some things are difficult to measure, although UNICEF has done a good job trying.

  • I am a little afraid of this number counting. In fact, referring to e.g. education, not only the number of students or the percentage of people participating is important, but also the quality of the education provided. This is much more difficult to measure, but it is what really contributes to the progress of the country. The same goes for health, etc. I see that very often emphasis is put on direct outputs while we forget about the outcome, which is the real impact (the change that has been brought about by the programme). And this is of course a shared responsibility with the partner country.
    Therefore, I would also plead for using the country’s own indicators when measuring results: this not only strengthens the M&E function in the country, but is also a direct contribution to the policy of the partner country, and measures the involvement of the partner country in the implementation of the programme and the results obtained.

    • @Patrick

      Thanks. There are two separate points in your comment.

      First: what do you count? Do you count inputs (spending), outputs (e.g. school enrolment) or outcomes (e.g. educational improvements)? All can be ‘number counting’ (a phrase which you seem to use pejoratively, as if there is something bad about counting things). Clearly if you count the wrong things (e.g. school attendance instead of educational outcomes) then the resulting numbers may be misleading, and the process may set up incentives to do the wrong thing. The answer to that is simple: don’t count the wrong things.

      Second: who counts, and how? Here you emphasise strengthening the country’s own M&E function and the ability of the partner country to monitor its own policies. Clearly this is desirable, other things being equal; and aid agencies do it too little. But aid agencies have done too little to support the systems of partner countries for decades: this has nothing to do with the recent increased emphasis on ‘results’. My view is that the shift towards measuring results which matter (educational equality, reductions in maternal mortality etc) instead of tracking inputs and processes which were of interest only to the donor agencies is likely to lead to MORE investment in partner country systems, not less.

      Owen

  • Excellent piece Owen, I found it crystallises the tension nicely and I broadly agree – we need to know whether we are making a difference or not, but we need to measure the right things and not skew priorities. The only bit I disagree with is that a wrong-headed approach to results has not yet been made apparent – there are many examples where it has, both at small-scale NGO level and at large-scale World Bank number-crunching level. The key is what we are measuring – and unfortunately we too frequently measure the wrong thing. As an old hand said to me recently on a project I am developing: “you could get great answers to that question, but it is the wrong question – better get a partial answer to the right question”… Jonathan

    • @Jonathan

      Thanks. I agree: if there are cases in which a wrong-headed approach to results has emerged, then we should make that clear and try to put it right. It is important to identify those cases where something does go wrong so that we can learn what needs to be done to get the approach right. (What I think is unhelpful is people – not you – who say we should not have a results approach at all because there is scope for it to be done badly.)

      Owen

  • As usual, Owen you do a brilliant job of capturing a complex development dilemma and laying out the issues clearly.   As a development practitioner in the USAID world, this problem is THE challenge in trying to do meaningful work while still staying in business.
    I agree with Jonathan that the wrong-headed approach to measuring results has not been widely understood or accepted.  Before we can even talk about a common, honest results framework, much more work needs to be done to debunk the simplistic approaches to measuring results that are driven by vertical programs like PEPFAR, the Presidential Malaria Initiative and the Global Fund.   I can’t tell you the number of frustrating experiences I have had in trying to convince donor representatives that indicators like # of MOUs signed, # of meetings held, # of persons trained, etc. are easily gamed and not meaningful.   They are certainly not worth the time and money that we have to spend to collect them when that money would be better spent conducting more rigorous approaches to evaluation.
    Sadly, I sometimes find donor reps agreeing with me about all of the weaknesses in the simplistic approaches but saying that those methods are mandated by the Congress or that those are the best indicators people could agree on.
    There is another aspect to the problem of reporting of results that was not fully addressed.  Many organizations that could do a better job of reporting results maintain the simplistic approach because it drives their business model.  We “saved” xxx children’s lives, fed xx children, averted xx cases of AIDS, provided xx couple-years of protection, etc. – these are nice stories that help raise money.  Indeed, many development organizations’ results measurement is driven by the staff in the fundraising and PR departments.   As long as there is competition for development and charity dollars, there will be strong financial incentives to perpetuate the simplistic approaches to measuring results, especially by the organizations whose work is easily suited to reporting simple results.  Organizations that implement the messy but critically important work of policy reform and systems strengthening will never be able to compete with organizations that vaccinate children, build schools and deliver bednets – even if those organizations are actually weakening national systems.
     
    While I agree with most of your recommendations, I am skeptical about the feasibility of some multilateral, international agency charged with unbiased measurement of results.  The reforms in results measurement will have to be linked to the donors funding the programs.  This will require difficult culture changes that will have to be fought organization by organization.

  • Managing worries about focusing on results in development cooperation
    It is hard to see how the measures suggested can overcome the worries you listed, which clearly are real. 
    You develop both the worries and the ways to manage them largely from the perspective of the donor – the political need to show results, the taxpayers’ readiness to pay, and the behavior of donor agencies and donor professionals. In setting out the worries this works, because there you also bring in what it means to the countries that receive aid and their people, including possible distortion of priorities, ignoring equity, perverse incentives and partnerships.
    The need to overcome public skepticism, ensure effective spending and unblock the dysfunctional political economy of aid is important, but these are only a part of the challenges with aid.
    While you note that development professionals may see the results focus as insulting, I also think many development professionals have bought the positive rationale for being able to demonstrate results.
     In the countries receiving aid, governments now tend to stand accountable to the donors rather than to their people: for their choices and actions, how they set out their development policies and priorities, domestic spending, measures for social justice and equity, access to basic services, and economic policies and development.
     So, what I miss in your proposed steps for donors is tackling the basic need for countries to turn accountability to donors around into accountability to their people: to agree with their people, as well as with donors, on what results their development plans and investments of domestic and external resources should deliver, and how to report on them, whether by measuring results or by monitoring progress.
     It is within such a framework that it makes sense to assess the value of aid and undertake impact evaluations as you discuss. The problem with the large impact evaluations we have seen so far, such as those for GAVI and the GFATM, is that they are resource-heavy and struggle with attribution. The most important thing in impact evaluation is to understand the overall contribution to a positive or negative outcome, whether it relates to the policy framework or to the effectiveness of the overall investments – of which aid effectiveness is a part to study, but most often a fairly small component.
    You say that citizens (in donor countries) must become involved more directly in how their money is used. This is true, but it is even more important that citizens in the countries receiving aid are involved more directly, both in how the aid money is spent and in overall domestic spending, including "their tax money".
    You note the challenge of managing in complexity, and that measuring results is the most plausible response. You may be right – if so this applies to both stakeholders in donor countries and in countries that receive aid. But this is a field where there is a need to test new ideas and approaches, just as you propose.
    In trying to understand some of the challenges in global governance for health, managing in complexity is as relevant and urgent as it is when facing the complexity of partnerships for development. What do complexity, interdependence, interconnectivity, autonomy and inequity mean for global governance, if it is to help drive positive health outcomes as a way of measuring results?

  • Who wouldn’t admire Owen Barder’s tireless campaigning for the results agenda? Alas, that doesn’t make the object of his crusade any less flawed. I “graft” my comments onto the four reasons he gives for why the results agenda is important. For a more complete criticism of “Value for Money”, see: http://www.thebrokeronline.eu/Blogs/Busan-High-Level-Forum/Value-for-money-or-Results-Obsession-Disorder
    1. It’s certainly a good idea to demonstrate to the people that aid is making a difference. But we shouldn’t take “the people” for a ride by (implicitly) making them believe that the relationship between aid and its effects is a straightforward, quantifiable cause-and-effect relationship. The seventh criticism OB mentions is real indeed (and is nowhere refuted). It’s easy enough to say “don’t count the wrong things”, but we know that the more we shift from inputs through outputs to outcomes, the less reliable our measurements become. It may be frustrating that the developments that are most transformational are the least measurable, but it is a fact. And we shouldn’t hide it from “the people”.
    2. Of course we have to use aid money as effectively as we can. But it is simply not true that “almost everything we do with aid will make a positive difference.” In my very long career in the aid sector I have seen numerous examples of apparently positive results at the micro-level that (afterwards) turned out to have a negative effect at the macro-level. (And conversely, what seem to us bitter failures are sometimes successes in disguise, if Oscar Wilde accepts a wink from the other side of the Channel.)
    3. Consequently, it is anything but sure that “measuring results is the key to unblocking the dysfunctional political economy of aid”. Measuring results would reveal “the costs of tied, unpredictable, proliferated, projectised aid” only if there were a quantifiable, proportional causal relationship between aid and its effect. There simply is no such relationship. Sadly, the Paris-Accra-Busan agenda has never conceded this fact.
    4. Sure enough, development “cannot be reduced to a series of smaller, more tractable problems to be solved independently”. But, unless I haven’t understood the first word of it – the CGD brochure is here on my desk – reducing social, political and economic change to a series of processes and activities which can be planned, tracked and reported is exactly what “Cash On Delivery”, the pinnacle of “Value for Money”, does.
    I definitely agree with Catherine that knowing how results were reached and understanding the context is vital. The essence of development is fairness, accountability, participation and creating opportunities for people to go their own way… all elements that are absent from “value for money” thinking.
     

  • Marcus

    Thanks for your thoughtful comments.  I think we agree more than you acknowledge.

    In your first comment you say that as we focus more on outcomes so our measurements are less reliable. That’s often true, but we should not use that as a reason to escape our obligation to do the best we can.  This is partly because of the need to demonstrate to people that their aid is making a difference; but also because we have to be sure that we are doing as much good as we can.  If we live in a world in which we can measure and demonstrate outputs, but we do not have reliable evidence linking those outputs to the outcomes which we are trying to achieve, then what on earth are we doing spending all that money on those outputs?

    On  your second point: I agree. Some aid does not make a positive difference and can do harm. I regard that as even more reason to be careful to measure results as best we can.

    On your third point, I wonder if your point has been garbled by your desire to express it concisely? You seem to imply that unless there is a ‘quantifiable proportional causal relationship’ between aid and its effects, we cannot make use of evidence of the costs of ineffective behaviour to push for improvements in aid. That seems to me to be obviously wrong.  I didn’t understand your reference here to the Paris-Accra-Busan agenda, which seems to me to focus far too little on results and far too much on aspirations to reform donor processes.

    On your fourth point, it sounds to me as if you have indeed misunderstood the idea of Cash on Delivery aid.  The idea was originally expressed in a paper which called for a ‘hands-off’ approach to foreign aid – the point of wanting to be ‘hands off’ is partly that we don’t believe that achieving development outcomes can be broken down into a series of predictable, planned, tracked and reported changes. I don’t regard Cash on Delivery as the pinnacle of ‘Value for Money’ but as an alternative way of achieving good results, by allowing and supporting governments and others to explore ways of working in their own context and to be able to innovate, test and adapt without having to follow donor prescriptions.

    On Catherine’s point, other things being equal it is clearly better to know how results are achieved than not to know this.  But it is strange to argue that there are no circumstances in which the main or only thing we want to know is whether something has worked.

    Owen
     

  • A very good post. The elephant in the room regarding criticism of the results agenda is the rent-seeking element within the aid industry, since a focus on results implicitly rejects the 7% argument.

    I will restrict my further comments to Owen’s third and fourth points since, in terms of direct impacts, I think his arguments regarding the focus on results are sound.

    3) Aid is about more than direct impacts; there are also impacts on the recipient government’s policy directions, the ongoing nature of its institutional development, and its macroeconomic policy as a whole. These are matters best left to the aid recipient government and its constituency to evaluate. Unfortunately this does not happen as much as it should, since aid is a donor-driven rather than a recipient-driven industry. My best personal example is the way that, when soliciting stakeholder inputs, a country’s elected representatives are at times excluded from the process.

    4) Democratic countries evaluate results on a routine basis through elections. In general, these evaluations are not based on any econometric model but rather on the electorate’s judgement as to which leader or party has the best capacity to make credible commitments in what is always an unstable and unpredictable social, economic and political environment.

    The call for an Institute for Development Effectiveness appears similar to the Independent Commission for Aid Impact. Be that as it may, setting up an independent body with the powers of an Auditor General to evaluate and report on specific aid programmes is a very good idea.

  • I sincerely appreciate your kind reaction, Owen. Seeing your dedication, I share your view that we certainly agree on many important issues. As for our opinion on the results agenda however, I’m not so sure. If you have read my article on “The Broker” you must have come to that conclusion already.
    Let me come back to the same 4 points.
    1. Yes, you are right: “we live in a world in which we can measure and demonstrate [some] outputs, but [in the large majority of cases] we do not have reliable evidence linking those outputs to the outcomes which we are trying to achieve.” And the more I learn about social and political processes, the more evidence I find to support this thesis. So, yes, as you say: “what on earth are we doing spending all that money?” It is more than high time that we dropped our pretensions about what aid can achieve (which is not the same as saying that aid is useless!).
    2. To insist that we “measure results as best we can” may sound convincing at first glance, but it entails the risk that we mistake partial and half-good data for reliable data.
    3. Thanks for giving me a second chance here. Let me start by saying that the aid-effectiveness agenda (“Paris”, for the sake of brevity) is a bit of a misnomer. Dictionaries are unanimous: effectiveness is the degree to which an endeavour produces the desired result. So, in reality, “Paris” is hardly about effectiveness. It is an input-side agenda. Improvements at the input side are not insignificant, of course not. Not complying with “Paris” bears a cost, i.e. the input of aid is less efficient. Sadly, there it stops, for there is no conclusive evidence – see the reports made in the run-up to Busan – that complying with “Paris” leads to more effective aid, i.e. aid that contributes more effectively to development. And I do not think we will ever have such evidence. The reason is that there is no quantifiable, proportional causal relationship between aid and its outcomes. Concretely, we will never be able to demonstrate that one million of “non-Paris” aid leads to worse or better outcomes than one million of “Paris” aid. Hence my conviction that measuring results (if at all possible) is not the way to “unblock the dysfunctional political economy of aid”.
    4. Our understanding of the CGD brochure seems to be very dissimilar. What’s going wrong here? 

  • There is another weakness in current approaches to results measurement that has not been mentioned above but could be addressed through practical reforms. Results measurement typically considers impact only during the life of the project, not after the project has ended. If we want project implementers to think seriously about sustainability, then we need to evaluate whether impacts have been sustained after the projects are over. Little wonder that most programs focus on short-term, highly visible results that disappear once project support is removed.
     
    To address this, donors need to structure projects differently. Instead of a five-year project that runs at full steam and then ends abruptly at the end of the fifth year, more projects need to be designed for seven to ten years, with four to five years of full implementation and three to five years of monitoring and evaluation of post-implementation impact. To make it interesting, contracts can include deferred performance fees, paid when and if impact has been shown beyond the period of implementation.

    • @Jeff absolutely right. ‘Impact’, for me, includes taking account of the broader effects, on broader communities than just the group of people directly touched by an intervention, and over a longer timescale.

      Owen

  • Another very useful discussion of the results agenda, Owen… but I think the discussion is far too focused on donor accountability and not enough on building accountability (and results measurement systems) at the country level. I wish we would start counting how many countries have adequate systems of their own for collecting and analyzing data on outcomes. Wouldn’t it be better, for instance, to report to the IDA Deputies (or any other donor authorizing parliament, congress, etc.) on how many countries have credible and reliable systems for reporting on results (to their own parliaments and citizens, and only then to donors)? I am struck that many discussions of good governance and anticorruption focus only on two input systems (financial management and procurement) and leave out the question of how well a country can count and report on outputs and outcomes on its own (in a particular sector or across sectors). Consider that only 2 of the countries in Africa have complete birth and death registration systems, for instance. This means that most decision makers at the local level have inaccurate and misleading information on the size and distribution of their own populations. Decision making at local levels has the most to gain from more, and better, information on results. Thanks for keeping discussions of the results agenda lively and thoughtful.

    • @Susan

      Yes, but I regard what you are saying as a complement, not an alternative, to making use of the results agenda in aid.

      Decision making at local level has much to gain from more and better information about results – not just from spending financed by aid but, more importantly, from spending financed by domestic resources and other sources of development finance.

      We can go round the world providing a combination of technical assistance, lectures and other forms of suasion aimed at other governments about this. You may have views on how best to support that. My post was about getting our own house in order too.

      Owen

  • Thanks Owen for the really interesting piece. While accountability for aid results to rich-country taxpayers is important, isn’t it a second-best option compared to domestic accountability for results to citizens in the recipient country? Particularly since aid will soon be dwarfed by other flows – such as natural resource revenues – where systems for domestic accountability for results are strongly needed. After all, aren’t citizens in developing countries “disempowered by lack of information”, and don’t they need to “become involved directly in how their money is used”? 

    • @Maia – Thanks. I don’t think these are alternatives. Rich country taxpayers are entitled to expect that their aid demonstrably contributes to improvements for people in developing countries. A good way for that to happen is for citizens in developing countries to have greater power over how the money is used and how services are managed and delivered.

      Much as many people working in development would like aid to be accountable only to people in poor countries, that is not going to happen – and in my view, it should not happen. Taxpayers are right to want to be able to hold their governments to account for how their money is used.

      Many aid agencies imply that they are in alliance with people in poor countries, overriding the ill-informed or short-term instincts of their taxpayers. I think that characterization is almost completely wrong: the interests of taxpayers and the intended beneficiaries of aid are generally quite closely aligned; it may be the slightly different interests of the many intermediaries (aid agencies, governments, contractors, individual staff etc) that distort the common interests at either end of the chain.

      Owen

  • Owen, I have posted some comments on this post on the Big Push Forward website, including some thoughts on how citizens in both the ‘developing’ and the ‘developed’ world might be better informed and better linked: http://bigpushforward.net/archives/1446

    • Thanks Chris for an excellent blog post.

      I especially agree with you that we need to establish with more evidence whether and how far the risks are materializing in practice. It could well be, as you say, that my anecdotal sense that things have been managed pretty well is in fact completely wrong.

      Owen

  • Re: Chris – “better informed and better linked”

    A cardinal rule in governance is that, when establishing a firewall against sub-goal pursuits, it is best that the players have equity in the operation, to protect rather than prey upon it. On that basis, compare wealth transfers between the different levels of government within federal systems with international wealth transfers between sovereign states. In general, I would hazard that there are fewer perverse incentives and outcomes in the former than in the latter.

    Under federal systems, it is ultimately the same taxpaying public that funds the transfer and receives its benefit, and that can therefore, on the basis of this direct exposure, evaluate the cost-benefit, efficiency and effectiveness of the programmes funded. In the international system we have a split: the taxpaying public that underwrites the transfer relies on second-hand information about efficiency and effectiveness, while the recipients rely on second-hand information about the cost-benefit.

    Under federal systems, one method of mutual signalling between government and the taxpaying public about the efficacy of the transfers and the programmes funded is the domestic bond market.

  • Re Dan Kyba:  Second Hand Information and Bond Markets.

    The issue of raising money from a group that is different from those who should receive the benefit is clearly an issue for aid agencies, as for others. I had a chat with Lachlan, our economics adviser, about Dan’s post, and he didn’t disagree with the central assertion that domestic wealth transfers may well be more effective than international wealth transfers. But he went on to say:
     
    a)      “to ascribe the fact that there might be fewer perverse incentives to the financing mechanisms in place seems to be a long bow to draw. Domestic programs are not immune to similar issues,
    b)      true, governments tend to finance domestic spending programs through debt financing, and true, this financing generally comes from both the domestic and international bond markets. And true, this will, to some extent, involve taxpayers who are the ultimate beneficiaries of government spending. However, to suggest that the taxpaying public is therefore in a position to “evaluate the cost benefits, efficiency and effectiveness of the programs funded” is unrealistic. In a competitive and liquid market for government debt, the price of a bond should reflect the demand for and supply of bonds. If investors are uninterested in holding bonds then, like any other good, the excess supply should depress the bond’s price. Kyba seems to assume that this demand for bonds is then driven (at least in part) by investors’ perceptions of the utility, effectiveness and efficiency of the programs being financed. It’s not beyond the pale to suggest that there is a general disconnect between the purchasers of government bonds (institutional investors) and the recipients of government programs (particularly if they involve social spending, say on health and education, which is targeted at the lower end of the socioeconomic spectrum).
     
    c)       Moreover, I’d argue that such institutional investors hold government debt because of its character, not the program that it finances. Sovereign debt has traditionally been viewed as the safest of all debt (unless you’re standing in Athens), and as such it tends to be considered first and foremost an integral part of a balanced investment portfolio, rather than a tool used to ensure government fiscal accountability. That’s not to say that bondholders do not ‘discipline’ governments in their own way; this is particularly the case if there is a view that a government has behaved recklessly (cf. Greece). However, I’d argue that this is a different point from the one Kyba seems to be suggesting.
     
    d)      There seem to be a few assumptions underpinning Kyba’s view, namely: that bond markets are competitive, that there is symmetric information between investors and government, and that bondholders care what a government does with its money (or at least care before the point of crisis). I’d argue that each of these points is debatable.
     
    e)      As for an international bond market, which might be implied though Kyba does not suggest it, I don’t see a natural extrapolation of Kyba’s argument from domestic bonds financing domestic programs to international bond markets financing international programs. Rather, international bond markets are simply a global extension of what happens domestically anyway. In simple terms, with the deregulation of global capital markets, governments (and corporates) can offer bonds either in the domestic market or in the international market. The choice between these two types of bonds will depend on a raft of economic considerations, such as funding costs, hedging practices, the currency being sought, etc. Importantly, unlike in the domestic bond market, when a government sells bonds internationally, the financiers are not taxpayers. Yet the fact that this makes no meaningful difference to the nature of government spending lends further support to the view that bondholders generally exert little, if any, power over government programming.
     
    On the flipside, however, if Kyba’s argument is correct and domestic bondholders do impose some form of discipline on government, it’s hard to see how international bond markets would play the same role in international wealth transfers as they do domestically. If governments used international bond markets to finance ODA, then there would, once again, be a massive disconnect between the beneficiaries of the spending programs (vulnerable people in developing countries) and the investors themselves”
     
     All of which suggests to me that direct connections between citizens in developed and developing countries, as suggested in my post http://bigpushforward.net/archives/1446, are important as a means of reducing asymmetries of information (both groups have privileged knowledge, but of different types) and of power, and that these would provide an important complement to other measures or mechanisms that might be considered as proxies for performance.

  • Thanks for the response, Owen. If donor/taxpayer accountability for results and domestic accountability for results aren’t substitutes/alternatives, could they be better complements? I think information on the impact and results of aid money could be collected and produced in such a way that it can be used by a domestic audience (including media, civil society, parliaments etc) as well as by the donor/taxpayer audience. One way this could be done is if, instead of just reviewing results for one programme or project, donors pooled resources and reviewed whole sectors in a country, including government spending. Such a sector-wide results review would be useful for a domestic audience by showing which projects work well (and could potentially be scaled up) and which don’t work or are more expensive for the same results. It would also capture spillovers between projects and highlight areas where donors/NGOs/government need to coordinate better. At the same time, it would provide the results data that donors need to report to taxpayers.
