Over the last decade, increasing attention has been paid to measuring, and establishing the causality of, the impact that development projects or interventions – whether privately or publicly funded – have on outcomes of interest.
In many areas, such as health, education, and social protection, there is an emerging consensus on how to estimate the causal impact of interventions. In other areas, such as water, sanitation, or transportation, where randomization or quasi-experiments are more challenging, there is a growing body of literature.
Even for “softer” interventions in areas such as institutional development, governance, or crime prevention, the analysis and the evidence are becoming increasingly “harder”.
It is unquestionable that this surge of rigorous impact evaluations is having a deep effect on policy formulation and project design and analysis in both public and private development entities.
A cursory review of the websites of governments (developed and emerging), local and international NGOs, think tanks, and development banks reveals an impressive proliferation of impact evaluations and of their effect on policy and project formulation.
The number of public and private institutions that will fund projects primarily on the basis of clear evidence of causality is increasing. A quick glance at the syllabi of leading graduate schools suggests that this surge will probably be sustained in the future, with newly minted economists, sociologists, and political scientists conversant in rigorous impact analysis.
And if one links this surge with the emergence of behavioral economics, one can only conclude that development practice and theory have been nudged to a new and better level.
Is this giving us a false sense of comfort?
It is interesting to note that this convergence of data and decision making has been at the core of the success of many businesses, as described in Super Crunchers. In the private sector, no matter how effective a particular treatment is, it will not fly if it does not help the bottom line and add value.
Paradoxically, the increasingly abundant literature on impact analysis (or effectiveness analysis) has not been accompanied by the same rigor and zeal for Cost-Benefit or Cost-Effectiveness analysis in development projects.
Without information on costs, impact estimates are not particularly useful for determining whether a specific investment is worthwhile. Yet there is comparatively little discussion of appropriate cost and cost-effectiveness (or cost-benefit) comparisons among interventions.
The decline of Cost-Benefit analysis has been highlighted both in the context of World Bank projects, by its Independent Evaluation Group, and in economics teaching, by Tony Atkinson at the last AEA meeting in Denver.
In his paper, Atkinson argues that welfare economics should be restored to a prominent place on the agenda of economists and should occupy a central role in the teaching of economics.
Nevertheless, the decline of Cost-Benefit analysis was partly the result of very elastic assumptions about benefits, benefit attribution, and causal links – assumptions that became far too stretched for the comfort of many and for the credibility of the results.
But this attribution and benefit identification problem can be approached not only from a welfare economics angle but also by valuing effects derived from impact analysis, as Anthony Boardman persuasively argues in his Cost-Benefit textbook.
In more recent papers, a crucial discussion has emerged, connecting cost-benefit and cost-effectiveness analysis to issues such as external validity, attribution, distribution (intra- and inter-temporal), optimism bias, valuation of multiple outcomes, variance in cost estimates, heterogeneous effect measures, and scale.
Two good recent examples of this literature – although still limited to health and education – are papers by Patrick McEwan and by Dhaliwal, Duflo, Glennerster and Tulloch. Both highlight the challenges and the merits of going back to the future on the back of more rigorous cost-effectiveness and cost-benefit analysis based on experimental or quasi-experimental evidence.
Estimating impacts (the E, for Effect) gives half of what one needs to know. Costing it provides the other half of Cost-Effectiveness: the C. Valuing it on the basis of this evidence might provide a better B, as in Benefits.
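To make that arithmetic concrete, here is a minimal sketch in Python of how an estimated effect (E), program costs (C), and a monetary valuation of the effect (B) combine into the cost-effectiveness ratio and the benefit-cost ratio. All numbers and names here are hypothetical illustrations, not figures from any of the papers cited above.

```python
# Minimal sketch: combining an impact estimate with costs.
# All figures are hypothetical, for illustration only.

def cost_effectiveness_ratio(total_cost: float, total_effect: float) -> float:
    """Cost per unit of effect (C / E), e.g. dollars per SD-unit of learning."""
    return total_cost / total_effect

def benefit_cost_ratio(total_effect: float, value_per_unit: float,
                       total_cost: float) -> float:
    """Monetized benefits over costs (B / C); > 1 suggests a worthwhile investment."""
    return (total_effect * value_per_unit) / total_cost

# Hypothetical evaluation: a program serving 1,000 children raises test
# scores by 0.15 standard deviations per child at a total cost of $45,000.
effect = 0.15 * 1_000   # total effect: 150 SD-units of learning
cost = 45_000.0         # total program cost in dollars
value_per_sd = 400.0    # assumed dollar value of one SD-unit (the hard part!)

print(f"Cost-effectiveness: ${cost_effectiveness_ratio(cost, effect):,.0f} per SD-unit")
print(f"Benefit-cost ratio: {benefit_cost_ratio(effect, value_per_sd, cost):.2f}")
```

The fragile step, of course, is value_per_sd: putting a dollar value on a unit of effect is precisely the elastic-assumptions problem that undermined the credibility of Cost-Benefit analysis in the first place, which is why valuations anchored in rigorous impact estimates matter.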