By Anna Crespo
Development agencies are in a privileged position to foster a better understanding of program results and how to improve them, and it is in their interest to do so: after all, they want to sponsor effective interventions. One tool for understanding results, the impact evaluation, has been on the rise in international development over the last two decades. Not only can impact evaluations isolate and measure the effect of an intervention, but the growing availability of high-quality data has broadened the range of questions they can answer. But could impact evaluations go beyond their well-known role of contributing to knowledge and accountability, to a much broader role of supporting policy makers in executing projects?
The Inter-American Development Bank Group (IDBG) is no newcomer to impact evaluations. In fact, its first step towards their use dates back 20 years, when the government of Mexico sought IDBG support for the first evaluation of its conditional cash transfer program. Since then, hundreds of evaluations have been carried out, and every year the IDB Group features completed impact evaluations (in both public and private sector projects) in its Development Effectiveness Overview (DEO) report.
So what have we learned over these past two decades? The IDBG's Office of Evaluation and Oversight (OVE) decided to take stock of the organization's experience so far (full report here). OVE's focus was on the lessons that could be learned from conducting evaluations in organizations like the IDB Group. By reviewing all public sector operations approved between 2006 and 2016, including technical assistance operations, OVE identified 531 proposed impact evaluations. Among them, fewer than 20% (94) had been completed, and a little over half (286) were still ongoing, expected to conclude within the next five years. Among the completed and ongoing evaluations, a clear, and steep, learning curve was identified. Over time, the IDB Group has increased its capacity and engaged in better quality evaluations, which makes for a sturdier knowledge base.
Learning from failure
But the IDB Group also has much to learn from evaluations that were not as successful. Take cancelled evaluations, for instance, which comprise about 30% of the cases. Political challenges (30%) and implementation and design issues (34%) are the leading causes of cancelling an evaluation in cases where the underlying public loans and technical cooperation operations were not themselves cancelled. Political challenges lie largely beyond the IDB Group's control and are usually related to changes in government and policy focus.
But many lessons have been drawn from issues with implementation and design. For instance, we know that having a well-defined method ex ante is associated with a lower probability of cancellation. Currently, no public-sector operation with an impact evaluation is approved unless the proposed evaluation method meets minimum quality standards. In addition, the IDB Group has increasingly sought earlier engagement with authorities in the field, ensuring they fully understand the costs and benefits of committing to the impact evaluation and agree with the necessary steps.
But let's get to the point. The real question here is whether impact evaluations should be thought of not only as a tool for learning, but also as a tool for project implementation. Would that ease fears about their costs and the political commitment they require?
A recent study from DIME, the World Bank’s impact evaluation arm, argues that when properly planned and implemented, impact evaluations can actually help deliver projects. The authors of the study used information from World Bank projects to show that those with an accompanying impact evaluation disburse faster.
OVE's study replicated some of DIME's analysis, using information from all IDB public investment operations approved between 2009 and 2016. Our study is simpler, but the comparison between disbursement rates of projects with and without impact evaluations yielded results similar to DIME's.
What we found is that projects with an impact evaluation disburse faster after the second year and are completed, on average, three months earlier. One potential explanation is that projects with impact evaluations are better designed: their preparation takes, on average, 300 additional IDBG staff hours. Projects with an impact evaluation also seem to be more mature at approval, taking less time to reach eligibility and begin disbursing.
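To make the logic of this kind of comparison concrete, here is a minimal sketch of a difference-in-means test on completion times. The data are entirely synthetic and the numbers (means, spreads, sample sizes) are assumptions for illustration only; they are not IDB or DIME figures, and the actual studies control for many factors a raw comparison like this ignores.

```python
import random
from statistics import mean, stdev

random.seed(42)

# Synthetic, illustrative data only: simulated project completion times
# (in months) for projects with and without an accompanying impact
# evaluation, built to mirror a roughly three-month average difference.
with_ie = [random.gauss(57, 6) for _ in range(100)]
without_ie = [random.gauss(60, 6) for _ in range(100)]

def welch_t(a, b):
    """Welch's t-statistic for a difference in means (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

diff = mean(without_ie) - mean(with_ie)
t = welch_t(with_ie, without_ie)
print(f"mean difference: {diff:.1f} months, Welch t = {t:.2f}")
```

A simple contrast like this is only suggestive: projects that take on an impact evaluation may differ systematically from those that do not, which is exactly the kind of selection concern the papers discuss.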
There are certainly drawbacks to this type of analysis, which are discussed in the paper. But isn't it worth investigating further, given the real chance of improving project execution?
Interested in the IDB Group’s work on impact evaluation? Download our Development Effectiveness Overview 2018 (Chapter 4).
About the author:
Anna Crespo is an Economist Senior Specialist at the Inter-American Development Bank's Office of Evaluation and Oversight.