Development that Works
Careful: The object of the analysis determines the methodology…not the other way round!



    On Tuesday, January 24th, I had the pleasure of being one of the panelists for the session "Experiences in selecting impact evaluation methods for projects" during the International Workshop on Surveys and Impact Evaluation.

    My co-panelists were Yyannu Cruz Aguayo and Rodolfo Stucchi from the IDB, and Francisco Gallego from the Catholic University of Chile; Alessandro Maffioli, also from the IDB, chaired the session. We had a lively discussion, and the audience raised several interesting questions.

    (For those who, while reading, like to visualize the places where conferences and seminars are held, "la Capilla" (the Chapel) is one of the rooms of the old San José hospital of Santiago, where the workshop was held. The other rooms we had the chance to use, for Stata training, plenary sessions, and group sessions, were named "Carmen", "Purísima", and "Maria Magdalena", respectively. By the way, Chilean colleagues and participants in the workshop told us that the hospital is also famous for the ghosts that have apparently adopted it as their home…)

    Here is a synthesis of what was said at la Capilla:

    a)      What are the characteristics of a program that can influence the decision on the most appropriate methodology to evaluate it?

    1. Targeting, budget and timing.
    2. Incentives and limitations that can induce self-selection among beneficiaries.
    3. Data availability.
    4. The size of the program and its minimum scale for execution.

    b)      What can be helpful if we want to run an experiment?

    1. Excess demand can help us. Disseminating information about the intervention can be a crucial factor.
    2. Allocation indices (eligibility scores) let us work at the "margins" of our target populations.
    3. The fear of being evaluated (what if the program has no effect?) is often an obstacle. On the contrary, it should argue in favor of designing rigorous impact evaluations so as to broaden metrics and targets.
    4. Pilots are crucial to scale up interventions.

    c)       And what if we cannot run an experiment?

    1. We should not insist on trying to modify the design and implementation mechanisms of an intervention, especially when the evaluation strategy is not designed simultaneously with the intervention itself.
    2. We should look for alternative quasi-experimental designs.

    d)      What can be some advantages and disadvantages of designing evaluations based on administrative data?

    1. We can use administrative data to evaluate programs that were not necessarily designed to be evaluated.
    2. The information available can cover long periods of time, sometimes more than ten years at monthly frequency.
    3. For confidentiality reasons, administrative data sometimes cannot be exported from the agency that produces and manages them.
    4. They are well suited to quasi-experimental methodologies, such as difference-in-differences and matching, among others.
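
    For readers less familiar with difference-in-differences, the core idea in point 4 can be sketched in a few lines: compare the before/after change in outcomes for beneficiaries against the same change for a comparison group. The data and numbers below are purely hypothetical illustrations; in practice one would estimate this as a regression with controls (in Stata, R, or similar) rather than from raw cell means:

    ```python
    # Minimal difference-in-differences sketch on toy, administrative-style data.
    # Each record: (treated_group, post_period, outcome) -- all values hypothetical.
    records = [
        (0, 0, 10.0), (0, 0, 11.0),   # comparison group, before the program
        (0, 1, 12.0), (0, 1, 13.0),   # comparison group, after the program
        (1, 0, 10.5), (1, 0, 11.5),   # beneficiaries, before the program
        (1, 1, 15.0), (1, 1, 16.0),   # beneficiaries, after the program
    ]

    def mean(values):
        return sum(values) / len(values)

    def did_estimate(data):
        """DiD = (treated after - treated before) - (comparison after - comparison before)."""
        cell = lambda g, t: mean([y for (grp, per, y) in data if grp == g and per == t])
        return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

    # Beneficiaries improved by 4.5 on average, the comparison group by 2.0,
    # so the estimated program effect is 2.5.
    print(did_estimate(records))  # 2.5
    ```

    The subtraction of the comparison group's change is what lets administrative panels, which were never designed for evaluation, net out common trends, provided the parallel-trends assumption is credible.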
