By Paloma Acevedo*
Conducting an impact evaluation (IE) is a complex exercise full of challenges – if you don’t agree, you probably have not worked on one yet! Some publications, such as “Learning About Experiments That Never Happened,” and blogs about common mistakes in evaluations discuss some of these challenges. In the workshop “Surveys and Impact Evaluation of Public Policies,” recently held in Chile, the session called “Learning from Experience” devoted time to discussing these difficulties.
We asked an IADB team leader, a government project manager, an expert in data quality and an expert in impact evaluation about the main challenges that they have faced in their experience working with IEs, as well as solutions and lessons they have learned.
The challenges start early, with a partner’s skeptical disbelief (“What good is this? How much will it cost?”). Then there are non-trivial coordination issues (“How can we hope to coordinate eight institutions that literally speak different languages?”).
Then we need to persuade a mother and head of household, who does not have a free minute, to answer a questionnaire that looks like a book. And to cap it all, what if we end up with results that cannot be easily explained?
Here is what they told us:
Julia Johannsen is a social protection projects team leader in the office of the Inter-American Development Bank in Bolivia. She told us that the main complications start even before the impact evaluation begins. “What an expensive evaluation!” and “What is it useful for?” are some of the questions that usually concern her counterparts in government agencies.
Moreover, high staff turnover in the public sector and the lack of experience and expertise in impact evaluation usually aggravate the problem. To address this, Julia found it very useful to devote resources to raising awareness and to training her counterparts through seminars and workshops.
Co-financing part of the evaluation also helps. Similarly, referring to impact evaluations conducted in other countries and using external technical assistance have been useful in improving both acceptability and quality in evaluations in Bolivia.
Roland Pardo is the Deputy Director of Social Policy in the Economic and Social Policy Analysis Unit of the Government of Bolivia (UDAPE, for its acronym in Spanish), where the Bolivian conditional cash transfer program Bono Juana Azurduy is being evaluated.
For Roland, the biggest challenge was one of coordination. In order to conduct the evaluation, it was necessary to coordinate eight different institutions including three financing organizations (IADB, World Bank and a Cooperation Fund), two implementation institutions in the government (UDAPE and the Ministry of Health) and three different data collection firms…
Can you imagine the coordination effort required to manage all of the hiring protocols, integrate the demands of information in just one questionnaire, and ensure data quality?
Roland noted that communication among the different agents was key to overcoming administrative and technical difficulties, and that close coordination in field supervision is paramount.
Mario Navarrete, from Sistemas Integrales, has been working on ensuring data quality for more than two decades. In his opinion, ensuring data quality faces three main challenges:
(i) Questionnaires that are too long and incompatible with the available time and resources (sound familiar?). He highlighted the importance of researchers planning ahead and being extremely selective about the indicators to be incorporated into the questionnaire.
(ii) Making bad use of good technologies: information systems such as CAPI (Computer-Assisted Personal Interviewing) can give the false impression that they simplify questionnaire design. However – watch out! – designing a CAPI questionnaire takes at least as much effort as designing a paper one.
(iii) An imbalance between ethical requirements and needs in the field: it is important not to omit informed consent in a survey, yet it is equally important to write it in a friendly and understandable way, so that it does not put off respondents (and increase attrition!). Another difficulty that may stem from ethical considerations is the set of limitations on collecting personal data (addresses, phone numbers, etc.).
These constraints are sometimes so severe that they pose a real challenge when attempting to re-contact the sample. It is therefore important to find formulas that integrate both ethical and information needs.
Jose Ignacio Cuesta is a Senior Research Manager at J-PAL and has worked on several impact evaluation teams. In his opinion, some of the most important challenges come at the moment of analysis: after all of this effort, what do these results mean? To address this problem, he points out that process evaluations are helpful.
For instance, the “Servicio Pais” impact evaluation in Chile found impacts only in certain regions of the country and not in others. How can we explain this heterogeneity? A process evaluation helped. Other common problems are non-compliance and sample attrition – how can you avoid them?
There are no magic formulas. Jose Ignacio suggested controlling the first through good program targeting at the appropriate level. The second can be mitigated by collecting good contact data and by strengthening the monitoring and incentive mechanisms for enumerators, to ensure they do their best to find the respondents.
These investments during the design and implementation stages of the evaluation can save a lot of sweat during analysis!
I am sure that many of you have struggled with similar, and perhaps even worse, challenges when juggling impact evaluations, and have discovered creative solutions and lessons learned. Would you be willing to share them with us?