One thing you probably needed when you first learned how to count was your ten fingers (or twenty, if you count your toes too). So it is very handy that the IDB's new Guide on evaluating education projects comes in five steps. Not easy steps, mind you, but if you take them one at a time, they will save you a lot of time the next time you want to know whether hiring extra teachers is better than building a new classroom.
And that is not a trivial question. Public spending on education in countries like Colombia or Chile exceeds 4 percent of GDP. Brazil has 650,000 teachers, more than the entire population of the Bahamas and Barbados combined. India spends more on education than Costa Rica's entire GDP.
Ok. You get the point. Education is big. But it matters not only because it is big; it matters mainly because our children's future hinges on the educational system working for them. And although we still need to learn much more about what works and what does not, it is very useful to know how to learn more.
These new Guidelines for Impact Evaluation in Education Using Experimental Design by Rosangela Bando provide a starting point for asking the right questions and choosing the right tools. In five steps.
Step one. The first step is to understand what we know and try to know what we don’t. This step requires that you walk the walk of gathering what is already known.
In this section, the Guidelines provide a framework for thinking about education, its importance, and the evidence on what matters for education quality. They also lay out the basic causal framework for assessing a program's effects by comparing outcomes with and without the program, focusing on the assumptions required to identify the Average Treatment Effect.
The reader has to bear in mind that these Guidelines focus on random assignment and do not discuss other methods. If you want to take that side step, you should go to this book (or, in Spanish, this one; or, for more advanced readers, this one).
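The with-and-without comparison behind the Average Treatment Effect can be sketched in a few lines. The numbers below (baseline scores around 500, a roughly 20-point program effect) are invented for illustration and are not from the Guidelines:

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: each student has two potential outcomes,
# a test score without the program (y0) and one with it (y1).
n = 10_000
students = []
for _ in range(n):
    y0 = random.gauss(500, 100)     # baseline test score (assumed)
    y1 = y0 + random.gauss(20, 10)  # program adds ~20 points on average (assumed)
    students.append((y0, y1))

# Random assignment makes treated and control groups comparable,
# so the difference in mean observed outcomes identifies the ATE.
treated, control = [], []
for y0, y1 in students:
    if random.random() < 0.5:
        treated.append(y1)   # we observe y1 for the treated
    else:
        control.append(y0)   # and y0 for the control group

ate_estimate = statistics.mean(treated) - statistics.mean(control)
true_ate = statistics.mean(y1 - y0 for y0, y1 in students)
print(f"true ATE: {true_ate:.1f}, estimated ATE: {ate_estimate:.1f}")
```

The point of the sketch is the identifying assumption: because assignment is random, the control group's mean stands in for what the treated group would have scored without the program.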
Step two. After you know what you know, you need to specify the main research question. In this step, the Guidelines provide guidance on how to define the evaluation hypothesis: the why, the who, the what, the when, the where and the how.
A single evaluation cannot answer all questions, so this step establishes criteria for defining what purpose the evaluation will serve and why it is relevant.
The unit of analysis also matters. The author recommends using individual data because test scores tend to vary more within schools than across them. The when is also critical: effects are not constant over time and can fade, so time the evaluation for when the program is expected to bring about changes. But if your causal chain is loose or your sample does not represent your population, go back to step one.
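The variance point can be illustrated with simulated scores. The school and student spreads below are assumptions for the sketch; with parameters like these, most of the variation sits within schools rather than between school averages, which is why individual-level data buys statistical power:

```python
import random
import statistics

random.seed(7)

# Hypothetical illustration: 50 schools of 40 students each, where
# student-level spread (sd 90) dwarfs the school-level spread (sd 30).
scores_by_school = []
for _ in range(50):
    school_effect = random.gauss(0, 30)             # across-school spread (assumed)
    scores_by_school.append(
        [500 + school_effect + random.gauss(0, 90)  # within-school spread (assumed)
         for _ in range(40)]
    )

# Average variance of scores inside each school...
within_var = statistics.mean(
    statistics.pvariance(s) for s in scores_by_school
)
# ...versus the variance of school averages.
school_means = [statistics.mean(s) for s in scores_by_school]
across_var = statistics.pvariance(school_means)

print(f"within-school variance: {within_var:.0f}")
print(f"across-school variance: {across_var:.0f}")
```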
Step three. The third section establishes a framework for selecting the sample and setting up the randomized impact evaluation. In this step, the author summarizes the main formulas and rationale for power calculations, introduces block (stratified) randomization, and provides a useful randomization checklist (we at the IDB love checklists).
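Under standard textbook assumptions, the two main ingredients of this step, a power calculation and block randomization, can be sketched as follows. The critical values, effect sizes, and the `block_randomize` helper are illustrative, not the Guidelines' own formulas or code:

```python
import math
import random

def sample_size_per_arm(mde, sigma, z_alpha=1.96, z_beta=0.84):
    """Standard two-sample formula: students per arm needed to detect a
    minimum detectable effect `mde` given outcome standard deviation
    `sigma`, at a two-sided 5% level with 80% power (default z values)."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / mde) ** 2)

def block_randomize(units, block_key):
    """Stratified (block) randomization: within each block, assign half
    the units to treatment at random, balancing arms inside every block."""
    assignment = {}
    blocks = {}
    for u in units:
        blocks.setdefault(block_key(u), []).append(u)
    for members in blocks.values():
        random.shuffle(members)
        half = len(members) // 2
        for u in members[:half]:
            assignment[u] = "treatment"
        for u in members[half:]:
            assignment[u] = "control"
    return assignment

# Example: detect a 0.2 standard-deviation effect on test scores.
n_arm = sample_size_per_arm(mde=0.2, sigma=1.0)
print(f"students per arm: {n_arm}")

# Example: randomize 8 (hypothetical) schools within two regions.
random.seed(1)
schools = [("north", i) for i in range(4)] + [("south", i) for i in range(4)]
assignment = block_randomize(schools, block_key=lambda s: s[0])
print(assignment)
```

Blocking by region (or by baseline scores) guarantees that each stratum contributes equally to both arms, which tightens the comparison relative to a single unrestricted draw.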
Step four. So by now, hopefully, you know almost everything that is known, you have spotted one thing you don't know so you have a good question, you have the math down, and your theory of change holds water. Now you need the data.
This section provides a very useful overview of data sources, particularly education standardized tests such as ENLACE, PISA, ONE, Prova Brasil, SIMCE, SABER, TIMSS, PIRLS and TEMA, their use in your own evaluation, and how your own data can be constructed consistently with some of these sources.
Step five. With data at hand, it is now time to analyze it. This section provides guidance on how to calculate program effects, analyze specific groups of interest, interpret results, address issues of external validity, and handle deviations from the original design, changes in the original sample, and imperfect compliance.
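As a minimal sketch of that analysis, assuming simple random assignment and simulated scores with a true effect of 15 points (both assumptions of this example, not the Guidelines'), the program effect can be estimated as a difference in means with a normal-approximation confidence interval:

```python
import math
import random
import statistics

random.seed(3)

# Simulated endline scores: control centered at 500, treated at 515.
control = [random.gauss(500, 100) for _ in range(1000)]
treated = [random.gauss(515, 100) for _ in range(1000)]

# Program effect = difference in means; its standard error combines
# the sampling variance of each group's mean.
effect = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))
low, high = effect - 1.96 * se, effect + 1.96 * se

print(f"estimated effect: {effect:.1f} (95% CI {low:.1f} to {high:.1f})")
```

In practice this is where the section's other concerns bite: if compliance is imperfect, a comparison by assigned (not actual) treatment gives the intent-to-treat effect, and subgroup estimates need the power calculation from step three redone for the smaller samples.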
There you go. Five steps. Now walk the walk.