
Designing an impact evaluation that is rigorous, feasible and relevant can be a challenging but rewarding experience. Political, technical, and operational criteria that at first might seem incompatible must be reconciled and made to work in unison. The good news is that there is often a way around these seemingly competing criteria, and a growing number of high-quality impact evaluations are being carried out. The experiences gleaned from these evaluations have now been translated into tools that all of us can use to design better impact evaluations! The IDB’s Impact Evaluation Hub incorporates this state-of-the-art material, which you are invited to access as you work on your next impact evaluation.
And here are three steps to get the process started:
Step 1. Define the evaluation questions most relevant to your program. Impact evaluation questions typically take the form of “what is the impact of an X on a Y?” In other words, what is the effect of your program or intervention (X) on your final outcomes of interest (Y)? To maintain policy relevance, impact evaluations should aim to fill specific knowledge gaps prioritized by the policy makers and development practitioners who design, execute, and fund programs. Identifying the most relevant questions is much easier when a program is based on a solid diagnostic of the development problem and an understanding of the existing evidence, and has a clear logical framework, including a well-defined target population and intended outcomes. The Impact Evaluation Checklist will help guide you through the evaluation process, and you can use the Design Template and the Concept Note Template to document the design as you make progress.
Step 2. Identify the appropriate evaluation methodology and sample. Random assignment, considered the “gold standard” of evaluation methods, will typically be used for impact evaluations when feasible. In other situations, however, operational and political conditions or data availability will make a quasi-experimental methodology preferable, such as Regression Discontinuity Design, Difference-in-Differences, or Matching. References on impact evaluation methods, including sector-specific applications, can be found in the Design Section. If you are collecting your own data, you will also need a sampling framework and will need to determine your sample size. The number of observations in your sample will be driven by the effect size that you seek to capture through your impact evaluation (as a rule of thumb, smaller detectable effects require larger sample sizes). The Power Calculation Spreadsheet is a simple tool to help you figure out the right sample size.
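To put that rule of thumb in concrete terms, here is a minimal sketch of a power calculation in Python using statsmodels; the Power Calculation Spreadsheet covers the same ground, and the significance level, power target, and effect sizes below are illustrative assumptions rather than recommendations for any particular program.

```python
# Minimal power calculation sketch: sample size per arm for a two-group
# comparison of means, under illustrative (assumed) parameters.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

alpha = 0.05   # significance level (assumed)
power = 0.80   # probability of detecting a true effect of the given size (assumed)

# Standardized effect sizes (Cohen's d) we might want to be able to detect.
for effect_size in (0.5, 0.2, 0.1):
    n_per_arm = analysis.solve_power(effect_size=effect_size,
                                     alpha=alpha,
                                     power=power,
                                     alternative='two-sided')
    print(f"detectable effect = {effect_size:>4}  ->  "
          f"~{round(n_per_arm):,} observations per arm")
```

Under these assumed parameters, the required samples come out on the order of 65, 400, and 1,600 observations per arm: halving the detectable effect roughly quadruples the sample you need, which is why the target effect size should be pinned down before fieldwork is budgeted.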
Step 3. Draw up detailed plans for your impact evaluation’s implementation. You should think of an impact evaluation as an “operation within an operation”. The impact evaluation will need staff, budgets, and planning to be implemented successfully and in close coordination with the program. Timing and coordination are crucial, since baseline data must be collected before the intervention starts, program monitoring is needed to know what happened where, and follow-up data and analysis may be needed at critical moments to feed policy decisions. The Impact Evaluation Hub has several planning tools for this purpose, including the Impact Evaluation Gantt Chart, the Data Collection Gantt Chart, and a Budget Template.
Bookmark the IDB’s Impact Evaluation Hub; we invite you to return frequently as you work on your own impact evaluation!