Evidence-based policy formulation and development projects need – drum roll here – good data. These data can come from routine collection in country-wide surveys and censuses, from administrative records, or from random-sample surveys.
Data quality is critical in any evaluation effort, and mis-measurement can be a major Achilles’ heel for any statistical approach that relies on empirical evidence, particularly in surveys conducted in developing countries.
This Guideline – Improving the Quality of Data and Impact-Evaluation Studies in Developing Countries – prepared for the IDB by Guy Stecklov (Hebrew University of Jerusalem) and Alex Weinreb (University of Texas at Austin), offers valuable insights along with a “top ten” list of ways to reduce survey measurement error.
Stecklov and Weinreb offer a “back to basics” two-stage approach to minimizing error. First, reduce error during data collection itself, embedding elements in the process that make it possible to estimate the error’s effects and overall magnitude. As a second line of defense, the authors provide a menu of statistical – particularly econometric – methods developed to help avoid the most problematic effects of mis-measurement.
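To see why mis-measurement is so damaging for evaluation work, consider the textbook case of classical measurement error in a regressor, which biases an estimated effect toward zero (attenuation bias). The simulation below is an illustrative sketch – it is not taken from the Guideline, and the specific parameters are assumptions chosen for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
beta = 2.0  # the true effect we hope to recover

x_true = rng.normal(size=n)             # unobserved, correctly measured regressor
y = beta * x_true + rng.normal(size=n)  # outcome

# Classical measurement error: the survey records only a noisy version of x.
# With equal signal and noise variance, the reliability ratio is 0.5.
x_obs = x_true + rng.normal(size=n)

slope_true = np.polyfit(x_true, y, 1)[0]  # close to the true beta
slope_obs = np.polyfit(x_obs, y, 1)[0]    # attenuated toward beta * 0.5

print(f"slope with true x:  {slope_true:.2f}")
print(f"slope with noisy x: {slope_obs:.2f}")
```

Here the estimated effect using the mis-measured regressor is roughly half the true one, which is exactly the kind of distortion that the econometric corrections in the Guideline’s second stage are designed to address.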
The Guideline focuses on reducing error at the source, offering best-practice guidance for data collection, with a very interesting section on the role of incentives showing that even high-profile, gold-standard evaluation projects – such as PROGRESA – are not immune to these types of data-quality problems.
Ultimately – and perhaps this is the central point – evaluation data themselves need to be evaluated, and the best approach is to seek external validation sources, which can be very challenging in developing countries. As an alternative, pretesting can be pursued – not as a single isolated step, but as a substantial element in refining the entire data-collection process.
The authors condense their recommendations into a top-ten list for reducing survey measurement error:
- Follow basic administrative guidelines
- Identify the “central players” regionally and nationally, and consider ways to work with them and reduce the chance of “spoilers”
- Conduct pretesting for all questionnaires
- Hire relatively large numbers of interviewers, who should be tested during training, with high goals set and success rewarded
- Assign interviewers using interpenetrating sampling techniques
- Consider all potential errors of non-observation, including sampling, coverage, and non-response
- Include questions that allow ex post identification of different types of measurement error
- Carefully evaluate whether there are systematic non-response patterns that might affect the interpretation of findings
- Design clear guidelines for filling in missing data, preferably in interviewer teams shortly after each day of data collection
- Attempt to compare results with those obtainable from routine statistics or other independent data sources