Copyright © 2016. Inter-American Development Bank. If you wish to republish an article, please ask for permission at email@example.com.
By Marta Dormal.
Growing evidence suggests that low-quality childcare services not only can fail to produce the desired child development outcomes but, in some cases, can actually be detrimental to participating children. In this blog, we have talked about how process quality – the dynamic aspects of a service, such as child-caregiver interactions – may have a greater impact on child outcomes than structural quality, which refers to more readily observable indicators such as service infrastructure. We have also presented the various instruments that can be used to measure service quality, with recommendations on how to administer them in practice. Let’s build on this knowledge and continue that conversation with a focus on one specific objective of measuring quality: the development of a systematic monitoring process for quality improvement.
Why is it important to monitor service quality?
We know that quality, particularly process quality, is the key channel through which early childhood services can improve children’s developmental outcomes. While there are several reasons why assessing service quality is important, the one I would like to focus on is quality improvement. Monitoring quality in a reliable, frequent, and systematic manner can provide useful feedback to program staff, which can then be used to inform and improve their practices.
What are the best practices of a quality monitoring system?
Quality Rating and Improvement Systems (QRIS) provide a set of general best practices that countries interested in revising their current monitoring systems can use as a starting point. There are, of course, many aspects that make for a good quality monitoring system, but if we focus on the objective of developing a systematic approach to monitoring quality, an ideal system would, among other things:
1. be based on a set of instruments that have been validated in the specific context where they will be implemented;
2. use instruments that capture the specific constructs the program aims to improve, and, ideally, a mixture of process and structural quality constructs;
3. use the same instruments or collect data consistently in order to measure progress over time;
4. be designed by taking into account the context in which it is developed (e.g., the current financial and human capacity of support systems).
How do countries in Latin America and the Caribbean (LAC) monitor service quality?
What we have observed in the region is that governments that do monitor service quality tend to capture structural quality variables using checklist measures focused on easily quantifiable aspects of care – counting the number of books in the classroom, for example – rather than measuring process quality variables. A case study comparing the monitoring strategies of several countries in LAC shows that 70% of the quality indicators monitored by the Government of Ecuador, and 57.5% in Mexico, focus on safety issues such as the condition of the lighting, ventilation, waste management, and so on. While these indicators are indeed important, the main concern with this practice is that process variables may matter more for child development than structural ones and, as such, should be the focus of any monitoring effort.
If there are validated tools to measure process quality, why not use those for monitoring?
The publication “How is child care quality measured?” presents a number of US-developed tools to measure service quality. So why not simply use those instruments for the monitoring of process quality? Some of these instruments have been administered in several countries in the region, but they are not being used by services as a regular monitoring tool.
The main reason is that, while process quality is essential for child development, it is also more complex, time-consuming, and costly to measure than structural quality. For example, an instrument that focuses exclusively on process quality, such as the Classroom Assessment Scoring System for Toddlers (CLASS), requires expert observation, judgment, and interpretation. In fact, evaluators must be certified in the instrument (a certification valid for one year) by participating in a two-day training and passing a reliability test.
The main implication is that, while an instrument such as the CLASS can be used for a data collection exercise related to a specific project, it may not realistically be used by services in the region to monitor quality in a frequent, systematic manner.
It is therefore essential not only to encourage governments in the region to measure the quality improvements of their services, but also to improve the measurement tools available to them to do so.
How can we improve the instruments to monitor quality?
If a central recommendation for services in the region is to monitor process quality in a systematic way, then should it not be accompanied by simplified, cost-effective tools to accomplish that goal? But how can these tools be developed?
There is no single answer to this question. It seems clear, though, that an important first step is to analyze the validity of the US-developed tools when administered in a LAC country. We have conducted this type of analysis for Ecuador. In the process, we have also examined the correlations between the more complex and the simpler US-developed instruments that measure quality (complexity in terms of their administration, scoring, costs, etc.). We then identified the subscales of the simpler instruments that appear most strongly associated with process quality as measured by the more complex instruments, such as the CLASS.
A follow-up study could build on this work by using (parts of) these subscales to create a simpler, checklist-like tool to monitor quality. Ideally, this new checklist would then be piloted in a country in the region, together with an existing instrument such as the CLASS and a measure of child development. The objective would be to study the association between this simplified tool and the other two measures to determine if it does, in fact, capture the dimensions of quality that are critical for the healthy development of children.
How do you think we can develop effective quality monitoring tools for ECD services in the region? Share your ideas and experience in the comments section below or by mentioning @BIDgente on Twitter.
Read this article in Spanish HERE.
Marta Dormal is a consultant on Early Childhood Development within the Social Protection and Health Division of the Inter-American Development Bank.