By Florencia Lopez Boo and Marta Dormal.
We have learned in this blog that policymakers and researchers in Latin America and the Caribbean (LAC) can measure the quality of their Early Childhood Development (ECD) services in different ways. We also saw, however, that most of these instruments have been developed in contexts (the United States in many cases) that are very different from those we are familiar with in the region. So, what should you do if you want to measure child care quality and administer one or more of these instruments in your own country?
Are we measuring what we want to measure?
This raises the question of what experts call "instrument validity." It is a broad term that encompasses many concepts, but it essentially asks: to what extent is this instrument measuring what we think it is measuring? Many of the variables we are interested in measuring in our work are abstract concepts, such as the warmth, respect, and enjoyment a caregiver communicates to children through verbal and nonverbal interactions.
It is therefore important to confirm that an instrument measures what you want it to measure before you draw any conclusions from the data to inform policy formulation. Note that this is true for any type of instrument, not only for those that measure service quality, which we use as an example in this article.
Where do I start exactly?
In 2012, an IDB team administered four different instruments designed in the US to measure the quality of child care services in a nationally representative sample of 404 centers in Ecuador. A crucial step for instrument validation in that study was adapting the instruments to the Ecuadorian context. Among other things, this included the obvious, such as translating the instruments into Spanish with wording that local respondents could easily understand and relate to, as well as more subtle aspects, such as ensuring that items were culturally relevant. For example, one item asked center caregivers whether they kept a pet in the classroom. This item had to be eliminated because keeping a classroom pet is simply not part of the Ecuadorian reality.
How can I analyze the data to validate my instruments?
Once the data were collected, we started to think about the following questions: are these instruments performing as expected when administered in Ecuador rather than in the US? Do they seem to be measuring what they were designed to measure? There are several techniques experts use to answer these questions. One exercise we carried out was a check of "internal consistency": whether different items on the same instrument (or on one of its subscales) that are intended to measure the same concept produce similar scores.
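To make this concrete, below is a minimal sketch of one common way to quantify internal consistency, Cronbach's alpha, written in Python with numpy and pandas. The item names and simulated scores are hypothetical, and this statistic is a standard option rather than necessarily the exact procedure used in the Ecuador study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items assumed to measure one concept.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: scores on five items of one subscale,
# one row per child care center (404 centers, as in the Ecuador sample).
rng = np.random.default_rng(0)
base = rng.normal(size=(404, 1))               # shared underlying concept
df = pd.DataFrame(base + rng.normal(scale=0.5, size=(404, 5)),
                  columns=[f"item_{i}" for i in range(1, 6)])

alpha = cronbach_alpha(df)
print(f"Cronbach's alpha: {alpha:.2f}")  # values above ~0.7 are often read as acceptable
```

In practice, you would compute alpha for each subscale of each instrument and compare the values against those the publishers reported for the original US samples.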
We also used what is called "Confirmatory Factor Analysis" (CFA) to test whether the Ecuadorian data fit the structure the publishers designed into the instruments. For example, does the developers' division of an instrument into seven subscales, each measuring a different concept, seem to fit the Ecuadorian data? Finally, another exercise was to look at correlations (that is, the strength of the relationship between two variables) between the instruments and other variables we would expect them to be related to. For example, intuitively, we would expect centers with better infrastructure to be more likely to sit in urban settings, so infrastructure scores should correlate with an urban indicator. Is that what the data tell us?
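As an illustration of both exercises, the sketch below runs a small CFA with the semopy package (one of several tools for this in Python; the actual study may well have used different software) and then checks a correlation with an external variable. The two-subscale structure, item names, and urban indicator are all invented for the example.

```python
import numpy as np
import pandas as pd
import semopy  # pip install semopy; one of several CFA options in Python

# Hypothetical data: six items assumed to load on two subscales.
rng = np.random.default_rng(1)
n = 404
f1 = rng.normal(size=n)  # latent "infrastructure" factor
f2 = rng.normal(size=n)  # latent "interactions" factor
df = pd.DataFrame({
    "infra1": f1 + rng.normal(scale=0.6, size=n),
    "infra2": f1 + rng.normal(scale=0.6, size=n),
    "infra3": f1 + rng.normal(scale=0.6, size=n),
    "inter1": f2 + rng.normal(scale=0.6, size=n),
    "inter2": f2 + rng.normal(scale=0.6, size=n),
    "inter3": f2 + rng.normal(scale=0.6, size=n),
})

# CFA: does the publishers' two-subscale structure fit the data?
description = """
Infrastructure =~ infra1 + infra2 + infra3
Interactions   =~ inter1 + inter2 + inter3
"""
model = semopy.Model(description)
model.fit(df)
print(semopy.calc_stats(model)[["CFI", "RMSEA"]])  # common fit indices

# Correlation check: infrastructure scores should be higher in urban
# centers (hypothetical urban dummy, simulated here to track f1).
df["urban"] = (f1 + rng.normal(size=n) > 0).astype(int)
infra_score = df[["infra1", "infra2", "infra3"]].mean(axis=1)
print(f"corr(infrastructure, urban): {infra_score.corr(df['urban']):.2f}")
```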
What can you learn from a validation exercise?
In the case of Ecuador, the results (available in English, and soon in Spanish too!) showed that, overall, these instruments work in expected ways and show meaningful variability in this context. There were, however, a couple of exceptions that should be taken into consideration: two subscales across the four instruments that measure the level of expressed negativity in the classroom (for example, whether the caregiver shows irritability, anger, or harshness towards the children) did not seem to fit the Ecuadorian experience. In fact, this was very consistent with what the IDB team had observed in the centers during fieldwork, where such expressed negativity was basically nonexistent.
In this sense, the validation analysis was important because it flagged aspects of the instruments that are not relevant to the Ecuadorian context and, as such, should not be used (or at least not in their current format) by the government or other interested parties in the future.
It is important to point out, however, that these findings do not allow us to draw conclusions for contexts in Latin American and Caribbean countries other than Ecuador. We therefore strongly recommend that countries interested in using these instruments conduct a similar validation analysis in their own context, using the Ecuador study as a guideline for replicating the exercise.
Do you have other ideas of how instruments can be validated in your country? Let us know at @BIDgente on Twitter.
Read this article in Spanish HERE
Florencia López-Boo is a senior social protection economist with the Social Protection and Health Division of the Inter-American Development Bank (IDB).
Marta Dormal is a consultant on Early Childhood Development within the Social Protection and Health Division of the Inter-American Development Bank.