By: Elena Arias Ortiz and Julian Cristia
As a follow-up to our blog post on raising awareness about the importance of evaluations, today we’d like to share what evaluations have revealed about the best way to incorporate educational technologies in Latin America and the Caribbean (the region).
Governments in the region have implemented a wide range of initiatives to promote the use of technology in the classroom. Approximately 10 million laptops have been distributed to public school students, but now it is time to ask: Have these laptops helped students learn better? What types of educational technology programs are more conducive to learning?
There are countless ways to incorporate technologies into teaching practices. However, for the purposes of this post, we shall focus on two broad categories: guided-use programs and non-guided-use programs. Guided-use programs provide clear guidance on the frequency and type of expected use (i.e., they specify the three S’s: Subject, Software, and Schedule). Non-guided-use programs, on the other hand, focus mainly on providing necessary technology resources like computers, the Internet, and general training.
To Guide or Not to Guide? That Is the Question
At the Inter-American Development Bank, we have critically examined the emerging evidence to understand the impacts that can be expected from these programs, and how to maximize them. In our technical note “The IDB and Technology in Education: How to Promote Effective Programs?”, we conduct a comprehensive analysis to provide recommendations on the programs in this area that we should promote in the region.
Specifically, are guided-use programs significantly more effective than non-guided-use programs, or vice versa? And how do the effects of these two types of programs compare with those of other educational interventions?
Ten years ago, it would have been difficult to answer these questions, but the picture has changed: we now have evidence from at least 15 robust evaluations conducted in developing countries, shedding light on the effects of technology on education programs.
Consider two illustrative cases: the One Laptop per Child program in Peru, a non-guided-use initiative, and a guided-use computer-assisted learning program in India. The evaluation in Peru found no significant improvements in math or language scores, while the guided-use program in India produced substantial gains in math.
However, are these two examples the norm? In other words, do guided-use programs tend to be significantly more effective? Our research reveals that the results from the two examples described above reflect a general pattern: programs that guide the use of technology resources increase academic performance four times more than programs that do not, particularly in mathematics and language.
The guided-use programs evaluated share certain features, such as:
- Computer use at school, not at home.
- Shared equipment. Students typically share computers; one device per student is not necessary to achieve results.
- The programs seek to foster students’ learning outcomes by focusing on a single subject (e.g., math or language), and tying all components together (infrastructure, content, and teachers’ professional development) to achieve that objective.
- Computer instruction supplements regular class time and emphasizes hands-on exercises in line with the curriculum.
- Technical support. Instructors conducting the sessions are expected to solve only logistical problems and to answer only software-related questions.
As we mentioned at the beginning, there are innumerable ways to integrate technology into the learning process. However, to weigh the effectiveness of guided-use programs against other instruments, we combined our results with those from a recent review of rigorous evaluations of a variety of educational programs in developing countries, including class-size reductions and teacher training. Interestingly, of the 10 types of programs considered, guided-use technology in education programs were the most effective at raising academic performance.
In contrast, non-guided-use programs were among the least effective; they only outperformed programs that provided cash grants to teachers to buy books or other local inputs.
All That Glitters Is Not Gold
Nevertheless, while it is true that, on average, guided programs generate higher returns to education, they also involve greater “risks”. If these programs are not well designed, they can even be harmful to student learning, highlighting the importance of evaluating small pilot projects before embarking on large-scale expansions.
The findings of the evaluations conducted in this area have significant policy implications. The first generation of technology in education programs in the region has been successful, increasing the presence of technology tools in the classroom. The time has come for all key players in this area—public and private sectors, NGOs, and multilateral organizations—to work actively towards designing and implementing effective programs that can be scaled up to meet our main challenge: improving learning in the region.
***
To learn more about the evaluation, check out “The IDB and Technology in Education: How to Promote Effective Programs?”