On Tuesday, January 24th, I had the pleasure of being one of the panelists for the session “Experiences in Selecting Impact Evaluation Methods for Projects” during the International Workshop on Surveys and Impact Evaluation.
My co-panelists were Yyannu Cruz Aguayo and Rodolfo Stucchi from the IDB, and Francisco Gallego from the Catholic University of Chile; Alessandro Maffioli, also from the IDB, chaired the session. We had a lively discussion, and the audience raised several interesting questions.
(For those who, while reading, like to visualize the places where conferences and seminars are held: “la Capilla” (the Chapel) is one of the rooms of the old San José Hospital in Santiago, where the workshop took place. The other rooms we used, for a STATA training, for plenary sessions, and for group sessions, were named “Carmen”, “Purísima”, and “María Magdalena”, respectively. By the way, Chilean colleagues and workshop participants told us that the hospital is also famous for the ghosts that have apparently adopted it as their home…)
Here is a synthesis of what was said at la Capilla:
a) What characteristics of a program can influence the choice of the most appropriate methodology to evaluate it?
- Targeting, budget and timing.
- Incentives and limitations that can induce self-selection among beneficiaries.
- Data availability.
- The size of the program and its minimum scale for execution.
b) What can be helpful if we want to run an experiment?
- Excess demand can help us: when a program is oversubscribed, randomizing among eligible applicants is both feasible and fair, so disseminating information about the intervention can be a crucial factor (see the sketch after this list).
- Allocation indices let us work at the “limits” of our target populations, for instance among applicants close to the eligibility cutoff.
- The fear of being evaluated (“What if the program has no effect?”) is often an obstacle. On the contrary, it should work in favor of designing rigorous impact evaluations, in order to expand metrics and targets.
- Pilots are crucial for scaling up interventions.
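Since excess demand came up as a key enabler, here is a minimal sketch, in Python, of what it buys us: with an oversubscribed applicant pool, a lottery assigns treatment fairly, and a simple difference in means then estimates the program's effect. All the numbers (500 applicants, 200 slots, a true effect of 2.0) are hypothetical, simulated purely for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical oversubscribed applicant pool: 500 eligible
# applicants competing for 200 program slots.
applicants = pd.DataFrame({"applicant_id": range(500)})
n_slots = 200

# Lottery: randomly draw the winners among all eligible applicants.
# Excess demand is what makes this both feasible and fair.
winners = rng.choice(applicants["applicant_id"], size=n_slots, replace=False)
applicants["treated"] = applicants["applicant_id"].isin(winners).astype(int)

# After the program runs we would merge in the observed outcome;
# here it is simulated with a true treatment effect of 2.0.
applicants["outcome"] = (
    10 + 2.0 * applicants["treated"] + rng.normal(0, 1, len(applicants))
)

# Under random assignment, a simple difference in means is an
# unbiased estimate of the average treatment effect.
effect = (
    applicants.loc[applicants["treated"] == 1, "outcome"].mean()
    - applicants.loc[applicants["treated"] == 0, "outcome"].mean()
)
print(f"Estimated treatment effect: {effect:.2f}")
```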
c) And what if we cannot run an experiment?
- We should not insist on trying to modify the design and implementation mechanisms of an intervention, especially when the evaluation strategy is not being designed at the same time as the intervention itself.
- We should look for alternative quasi-experimental designs.
d) What can be some advantages and disadvantages of designing evaluations based on administrative data?
- We can use administrative data to evaluate programs that were not necessarily designed to be evaluated.
- The information available covers long periods of time, sometimes more than ten years at monthly frequency.
- For confidentiality reasons, administrative data sometimes cannot be taken outside the agency that produces and manages them.
- They are well suited to quasi-experimental methodologies such as difference-in-differences and matching, among others (a minimal sketch follows below).
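To make that last point concrete, here is a minimal sketch of a difference-in-differences estimate on a simulated monthly panel of the sort administrative records can provide. The firms, months, and true effect of 3.0 are all hypothetical, invented for illustration; this is not the design of any program discussed on the panel.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Hypothetical monthly administrative panel: 100 firms observed for
# 24 months, half of them entering a program at month 12.
n_firms, n_months, t_start = 100, 24, 12
panel = pd.DataFrame(
    [(f, t) for f in range(n_firms) for t in range(n_months)],
    columns=["firm_id", "month"],
)
panel["treated_group"] = (panel["firm_id"] < n_firms // 2).astype(int)
panel["post"] = (panel["month"] >= t_start).astype(int)

# Simulated outcome: firm fixed effects, a common time trend, and a
# true program effect of 3.0 on the treated group after t_start.
firm_effect = rng.normal(0, 2, n_firms)
panel["outcome"] = (
    firm_effect[panel["firm_id"]]
    + 0.1 * panel["month"]
    + 3.0 * panel["treated_group"] * panel["post"]
    + rng.normal(0, 1, len(panel))
)

# Difference-in-differences: (treated post - treated pre)
#                          - (control post - control pre).
means = panel.groupby(["treated_group", "post"])["outcome"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (
    means.loc[(0, 1)] - means.loc[(0, 0)]
)
print(f"Diff-in-diff estimate: {did:.2f}")
```

The common time trend (0.1 per month) is exactly what a naive before-after comparison of the treated group alone would mistake for an effect; differencing out the control group's change removes it.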