As mentioned in a previous blog by my colleague Pablo Ibarrarán, the Office of Strategic Planning and Development Effectiveness of the IDB is publishing impact evaluation guidelines on selected topics and methods.
The latest publication in this series – Evaluating the Impact of Science, Technology and Innovation Programs (STIP): a Methodological Toolkit, co-authored by myself, Gustavo Crespi, Pierre Mohnen and Gonzalo Vázquez – reflects a joint effort of our office and the recently created Competitiveness and Innovation Division at the IDB to push the agenda of evidence-based policy making in the area of Production Development Policies.
When we conceptualized and developed this toolkit, our main objective was to facilitate the dialogue between STIP specialists and impact evaluation experts by providing the former with a complete yet simple review of evaluation methods, and the latter with key highlights of the specific challenges of evaluating STIP.
For this purpose, we first discuss in detail the theory of change underlying STIP and identify the fundamental evaluation questions and the typical outcomes of interest. We then devote a great deal of attention to the topic of data: we discuss the pros and cons of different data sources, data quality issues, and strategies for data collection.
In what is probably the core of the paper, we analyze in detail the potential application of experimental and quasi-experimental methods to STIP.
For each method, we highlight its characteristics and assumptions, practical issues related to its implementation, and its strengths and weaknesses in the specific context of STIP.
In an effort to be as practical as possible, we refer to many examples of existing and ongoing impact evaluations of STIP.
Finally, we conclude by discussing other key issues specific to the evaluation of STIP: the timing of effects, intensity of treatment, multiple treatments, impact heterogeneity, externalities, and general equilibrium effects.
Obviously, a methodological review is by nature a continuous work in progress, and this guideline is no exception. Any comments, suggestions, and corrections are therefore more than welcome.
Enjoy the read!
P.S. In the same series you can find two other guidelines I previously co-authored: Primer for Applying Propensity-Score Matching and Designing Impact Evaluations for Agricultural Projects. Comments are welcome on those as well!