In well-functioning markets, new ventures either succeed or fail. By contrast, as of 2005, every aid agency created since 1945 was still in existence (see the book on aid by Harford and Klein). By its nature, the aid system operates in an arena not subject to the invisible hand of efficient markets. For years the questions asked were: how much aid, and to whom? This fell short of the more important question: is aid working?
Recent years have seen a necessary and growing emphasis on managing for development results, a focus that no longer stops at aid volumes. Monitoring and evaluation (M&E) is at the heart of managing for development results. While the M in M&E is often overlooked, it deserves more attention than first meets the eye. Evaluation, on the other hand, is in the spotlight and clearly no ugly duckling.
The adoption of randomized controlled trials for development projects has provided hard evidence of what works and what falls short. Random assignment to treatment and control groups ensures that the two groups are, on average, identical, so that differences in the outcomes of interest can be attributed to the intervention at hand.
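The logic of random assignment can be illustrated with a small simulation. This is a hypothetical sketch, not data from any real program: the population size, baseline outcomes, and true effect are all invented, and the estimator is a simple difference in group means.

```python
import random
import statistics

random.seed(0)

# Invented numbers for illustration only.
TRUE_EFFECT = 2.0

# 1,000 eligible households with varying baseline outcomes.
population = [random.gauss(50, 10) for _ in range(1000)]

# Random assignment: each unit has an equal chance of treatment or control,
# so the two groups are the same on average, in observed and unobserved traits.
assignments = [random.random() < 0.5 for _ in population]

treated = [y + TRUE_EFFECT for y, t in zip(population, assignments) if t]
control = [y for y, t in zip(population, assignments) if not t]

# With randomization, the difference in group means is an unbiased
# estimate of the average treatment effect.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

Because assignment is random, the estimate recovers the true effect up to sampling noise; without randomization, selection into treatment would contaminate the comparison.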
Lessons from successful projects could then be replicated in other contexts. For example, in 2001 the IDB approved the largest loan in its history up to that point, to support expansion of Mexico’s Oportunidades program. The success of this conditional cash transfer program had been demonstrated via evaluation, and was subsequently replicated in many countries in and outside Latin America.
Because randomized evaluations address the causality question most directly, economists would broadly agree that they offer the most credible evidence of results. What, then, of the M in M&E? In other words, what is the role for monitoring?
The IDB is the largest source of development finance in Latin America. With such a large portfolio, it is not feasible for the IDB to implement experimental evaluation designs for all its projects, but it does monitor results for every single one.
The bread and butter of successful monitoring lies in the appropriate choice and definition of indicators, in line with a project's vertical logic, together with accurate baseline collection and target setting. While monitoring leaves the causality question unanswered, it provides clear accountability for outputs: are aid dollars being spent to achieve the desired outputs?
Therein lies monitoring's greatest virtue: by ensuring the desired outputs are achieved, it keeps the treatment linked to its original design. Replicating successful projects in a new context without monitoring outputs could be just as misguided as the old focus on aid volumes alone, in a world where one size does not always fit all.
To replicate the success of Oportunidades in differing country contexts, it was necessary, for starters, to ensure that the cash transfers were indeed reaching the intended beneficiaries, as they had in Mexico.
Were that not the case, given differing conditions in the new country environment, monitoring receipt of the cash transfers would raise a red flag and allow for preemptive, corrective action.
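A minimal sketch of such an output-monitoring check might compare each indicator's reported value against its target and flag shortfalls for follow-up. The indicator names, targets, and the 90% threshold below are all invented for illustration; they do not describe any actual IDB system.

```python
# Flag any indicator whose actual value falls below 90% of its target.
SHORTFALL_THRESHOLD = 0.90

# Hypothetical monitoring data for a cash-transfer replication.
indicators = {
    "households_receiving_transfers": {"target": 10000, "actual": 7200},
    "transfers_disbursed_on_time_pct": {"target": 95, "actual": 96},
}

def red_flags(data, threshold=SHORTFALL_THRESHOLD):
    """Return {indicator: achievement ratio} for indicators below threshold."""
    return {
        name: values["actual"] / values["target"]
        for name, values in data.items()
        if values["actual"] < threshold * values["target"]
    }

for name, ratio in red_flags(indicators).items():
    print(f"RED FLAG: {name} at {ratio:.0%} of target")
```

A check like this answers no causality question, but it surfaces delivery failures early enough for corrective action, which is exactly the safeguard role monitoring plays in replication.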
A randomista at heart, I am the first to recognize that well-designed evaluations provide the hardest evidence of results; the E in M&E is in vogue with good reason. Nevertheless, the M in M&E should not be dismissed as the ugly duckling. Monitoring is an important and necessary complement to evaluation: it enables managing for results across portfolios of large scope, and it can serve as a safeguard, keeping the treatment true to its original design when replicating at scale programs that evaluations have proved successful elsewhere.