Let us reflect on two dates. The first is 1440: the approximate year in which Gutenberg invented the printing press. The second is 2022: the year in which OpenAI launched ChatGPT, a technology capable of generating language that by January 2023 already had more than one hundred million users. The world was not and is not the same before and after those two dates.
When the printing press emerged, reading and writing were limited to a select few. The printing press enabled the mass production of texts and books, so that anyone could, in principle, have access to knowledge. Today, ChatGPT enables anyone to write a relatively technical text, or someone who is not a programmer to write code. You simply have to ask the right questions or enter the right commands, or prompts.
However, both the 15th century’s democratization of access to knowledge and today’s democratization of creative capacity are purely theoretical. What do I mean by this? The printing press brought about a transformation that made it theoretically possible for everyone to access books and knowledge, and that should have been positive in terms of equality. The reality, however, is that today, in the 21st century, almost 600 years later, 70% of 10-year-old children in the world are unable to read. Half of 15-year-olds in Latin America and the Caribbean do not understand the texts they read, which means they are still unable to access the wealth of knowledge made available by the printing press.
Why do we feel threatened by ChatGPT?
What does ChatGPT do that other technologies did not? ChatGPT is part of what is called “generative artificial intelligence (AI).” It is a conversational tool and, unlike other types of AI, it makes predictions and can generate new and original content from patterns in existing data.
What scares us is that, until recently, we used to say that specialized, routine, repetitive, and predictable work that required gathering information and data and following instructions was easily automated. These were tasks that robots were better at than we were.
We also used to say that, to shield people in the job market, they had to be trained for the tasks and jobs in which humans outperform machines: making connections between concepts that had not been linked before, facing situations that cannot be predicted, using and understanding our emotions to solve problems, and creating and generating new ideas. Generative AI broke that barrier.
The other threat is that changes are no longer linear but exponential. Each technological advancement accelerates the depreciation and obsolescence of skills in the market.
OpenAI conducted a study of the potential for automation across 1,016 occupations in the United States. The study considered a task to be automatable when technology could deliver the same quality while at least halving the time needed to complete it. The result: AI could perform 10% of the tasks of 80% of workers.
Artificial Intelligence: How Do We Prevent Widening Inequalities?
The key question we must ask ourselves as a society is: “What are the necessary conditions to get the best out of automation for this technological leap to also represent a leap in well-being that does not leave anyone behind?” In reality, the transformative power of the printing press in the 15th century and that of artificial intelligence today are only accessible to a select few.
Why am I saying this? Because the quality of the output generated by ChatGPT or other AI-based tools (MusicLM, GitHub Copilot, DALL-E) depends on the quality of the instructions, commands, or prompts that humans input into the system.
This can generate asymmetries and inequalities when using and leveraging AI: as in any conversation, the quality of answers depends on the quality of questions; the quality of dialogue depends on the quality of speakers.
Interestingly, when Mira Murati, OpenAI’s chief technology officer, was asked in a recent interview what problem ChatGPT is solving, the first thing she mentioned was education: “it has the potential to really revolutionize the way we learn” through personalized education.
Education Holds the Key to a More Equitable Future
Technology can actually help solve the great learning crisis facing Latin America and the Caribbean.
For example, to address its functional literacy challenge, Brazil developed the Letrus Writing Skills Program, which uses an artificial intelligence platform to support the development of students’ writing in Portuguese. The platform corrects students’ essays and provides immediate feedback through an Automated Writing Evaluation algorithm. Using that information, essays are then evaluated by human teachers, who assign final grades.
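To make the idea of automated writing evaluation concrete, here is a minimal, purely illustrative sketch in Python. This is not the Letrus algorithm, which relies on trained language models; this toy version only checks a few surface features of an essay and returns immediate feedback, illustrating the feedback loop that precedes human grading.

```python
def evaluate_essay(text: str, min_words: int = 150) -> dict:
    """Toy automated writing evaluation using surface checks only.

    A real AWE system uses trained language models; the word and
    sentence thresholds here are invented for illustration.
    """
    words = text.split()
    # Crude sentence split: treat ., !, ? as sentence boundaries.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]

    feedback = []
    if len(words) < min_words:
        feedback.append(f"Essay is short: {len(words)} words (target: {min_words}).")
    avg_sentence_len = len(words) / max(len(sentences), 1)
    if avg_sentence_len > 30:
        feedback.append("Sentences are very long on average; consider splitting them.")
    if not feedback:
        feedback.append("Surface checks passed; awaiting teacher review.")

    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "feedback": feedback,
    }
```

In a real deployment, this immediate feedback would go straight to the student, while the essay is queued for a teacher’s final grade, mirroring the human-in-the-loop design described above.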
At the Inter-American Development Bank (IDB) we are helping countries in the region incorporate the use of artificial intelligence to improve educational and learning processes with:
- Early warning systems that use machine learning to reduce exclusion in education by detecting which students are at risk of disengagement from the beginning of the year, so that the system can intervene in a timely manner.
- Platforms for accelerating and personalizing learning, which use gamification and can be adapted to support teachers so that each student can learn at his or her own pace. For example, this technology enables us to respond to the educational needs of children with dyslexia or to promote the learning of native languages such as Quechua through conversational bots.
- Learning assessments through solutions that enable us to test reading fluency and accuracy. At the moment, the most widely used tests are expensive because they require dedicated evaluators. Using artificial intelligence to process students’ reading eliminates this cost.
- Centralized assignment of teachers and students to optimize school choice and vacancy allocation by delivering personalized information on education center options and risks. Thanks to machine learning simulations, we can predict which schools will be oversubscribed and, therefore, which candidates (both teachers and students) are at risk of not being placed in a school. At-risk candidates are sent alerts with recommendations of schools where they would have a better chance of getting a spot and can change their application.
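The early-warning idea in the first bullet above can be sketched in a few lines. Real systems train machine-learning models on historical enrollment data; this hypothetical stand-in uses a hand-weighted risk score over attendance, grades, and prior flags, with weights and thresholds invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    attendance_rate: float   # 0.0-1.0, share of school days attended
    grade_average: float     # 0-100 scale
    prior_dropout_flags: int # times previously flagged

def risk_score(s: Student) -> float:
    """Hand-weighted stand-in for a trained model's predicted risk (0-1)."""
    score = (1.0 - s.attendance_rate) * 0.5          # absenteeism weighs most
    score += (1.0 - s.grade_average / 100) * 0.3     # low grades add risk
    score += min(s.prior_dropout_flags, 3) / 3 * 0.2 # capped history signal
    return round(score, 3)

def flag_at_risk(students, threshold: float = 0.4):
    """Return students above the risk threshold, highest risk first."""
    scored = sorted(students, key=risk_score, reverse=True)
    return [s for s in scored if risk_score(s) >= threshold]
```

In a real early-warning system, the flagged list would trigger timely outreach (calls to families, tutoring, counseling) at the start of the school year, which is the point of the intervention described above.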
The Problem Is Not Technology; It Is Inequality
All in all, the problem is not ChatGPT; that is, the problem is not technology. It is the low quality and high inequality of our education and training systems.
Technological advancements must be complemented by redistribution mechanisms so that improvements can benefit everyone. And education is the quintessential tool we, as a society, have for redistributing and enabling access to these benefits for all.