This question has been on our minds as we analyze the effects of a recent home visit program implemented by the Nicaraguan government (results forthcoming, stay tuned!). The Nicaraguan intervention, like most home visit programs, targets children’s first and arguably most important teachers: their parents. The curriculum aims to strengthen parents’ knowledge of early childhood development and provide concrete examples of activities parents can do with their children to encourage early cognitive, social-emotional, and language skills. In turn, these changes in parenting behavior are expected to result in improved child development outcomes. However, preliminary results show that while the intervention had a positive effect on children’s language and cognitive skills, there is almost no evidence of an effect on parents’ knowledge of early childhood development, attitudes, or parenting practices.
Work by Orazio Attanasio, James Heckman, and their coauthors has shown that the effects of a home visit program can be fully explained by increases in parental investments, which have strong effects on outcomes and are complementary to both parental skills and the child's past skills. Parental investments matter greatly for the accumulation of skills: material investments seem to matter more for cognitive skills, while time investments seem to matter more for socio-emotional skills.
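For intuition on what "complementarity" means here, the skill-formation literature these papers build on often models child skills with a CES production technology. A stylized sketch (the functional form and parameter names below are illustrative, not the exact specification of any one paper):

```latex
\theta_{t+1} = A \left[ \gamma_1 \theta_t^{\rho} + \gamma_2 P^{\rho} + \gamma_3 I_t^{\rho} \right]^{1/\rho}
```

Here \(\theta_t\) is the child's current skill stock, \(P\) is parental skills, \(I_t\) is parental investment (time or materials), and \(\rho \le 1\) governs substitutability: the smaller \(\rho\), the more strongly investments complement the child's existing skills and the parents' skills, so the same visit-induced increase in investment yields a larger return when the other inputs are higher.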
Why would we see an effect on child development but not parenting outcomes?
It turns out this finding is not so unusual. The evidence base on the extent to which parenting interventions actually shift parenting practices is fairly thin. We know that, when implemented well, parenting programs can have a powerful effect on early childhood development in the short term, and on education and labor-market outcomes in the long term. But it is not entirely clear that these programs work through the mechanism we would expect, that is, by changing parenting behavior.
One challenge is that many studies of parenting programs do not actually measure parenting behavior. For example, a recent review of the effectiveness of home visit programs in the U.S. identified 20 programs that had been rigorously evaluated at least once through a high-quality experimental or quasi-experimental study; only about half of those evaluations included standardized measures of parenting behavior collected via direct assessment or self-report.
Among studies that do measure parenting behavior, findings are mixed. One home visit program in Bangladesh found large effects on parenting practices (0.55 standard deviations), but in general the evidence suggests fairly modest effects (0.09–0.11 standard deviations, according to one recent analysis). There are also multiple examples of center-based programs targeting parents and children that have a positive effect on child outcomes without any evidence of an effect on caregivers, and others that show improvements in parental knowledge but no effect on behavior.
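To put these magnitudes in perspective, a standardized effect size is simply the treatment–control difference in mean outcomes, scaled by a standard deviation (typically the control group's):

```latex
\delta = \frac{\bar{Y}_T - \bar{Y}_C}{\sigma_C}
```

Assuming roughly normal outcomes, an effect of 0.55 moves the average treated parent from the 50th to about the 71st percentile of the control distribution, while an effect of 0.10 moves them only to about the 54th.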
A related question is whether the effect of parenting programs on parenting behavior is sustained. Here the evidence is even thinner, but two examples are cautionary. A home visit program in Colombia increased the variety of play materials and play activities in the home in the short term (and also improved child development outcomes), but two years after the intervention ended, neither the effects on the home environment nor those on child development were sustained. Likewise, evaluations of the well-known Jamaican home visit program implemented during the 1970s show a 16% improvement in the quality of parent/child interactions immediately following the intervention. However, follow-up studies conducted when the children were seven and eleven years old found no difference in parent/child interactions between intervention and control groups. As the authors of the Jamaican study note, the intervention may have affected other types of parenting behavior that were not measured, or that did not become salient until the children were older, such as parental investments in education.
Practical implications
Three conclusions emerge:
- First, to test the assumptions underlying parenting programs, we need to measure parent outcomes in addition to child outcomes. This is critical for understanding how and why parenting programs work (or don't) as they are scaled up and implemented in new contexts.
- Second, are we measuring the right types of parent outcomes? Many studies use the same set of standardized tools to measure parenting practices, which helps ensure reliability and comparability across studies. But no single tool is perfect, and it may be worth developing new tools that capture different aspects of parenting behavior, tailored to the context under study.
- Third, in addition to parent and child outcomes, data on the quality of home visit implementation is also key. This is particularly true given evidence that the home visitor/parent coaching relationship (e.g., providing parents feedback, explaining visit activities, reviewing activities from previous visits) is often the weakest aspect of home visit implementation.
What do you think? What types of parenting behaviors do the interventions in your country aim to change, and are they measured well? What do we need to know to better understand the conditions under which home visits and other parenting programs actually shift behavior? The 2nd LACEA BRAIN Conference, to be held in Montevideo, Uruguay, on May 20–22, 2020, will address exactly these kinds of questions. LACEA BRAIN is now accepting proposals for academic papers on behavioral economics and related topics; send your paper here.