Fake news, viral memes, hundreds of hours of video uploaded to the internet every minute, 3.80 billion people using social media worldwide, users creating and sharing content everywhere. But have you ever considered what happens after you post a comment or upload a picture or a video on the “network of networks”? As Bill Gates said, “content is king”, but who should take responsibility for its accuracy and validity?
When it comes to content moderation, or the practice of monitoring and filtering user-generated content, it is crucial to focus on the skills of the users. So, the broader question is: what kind of profile do you need to create good online communities and improve collective intelligence? To answer this question, I would like to go back for a second to the old idea of people versus machines. In recent years, we have been experiencing a profound transformation of our societies and economic model, one that has placed technology right in the eye of the storm. Beyond that, it has created what I believe is a false dichotomy. We might not realize it, but the digital transformation we are experiencing is not really about technology; it is about people and talent. And sometimes we tend to focus the conversation too much on the machine side instead of on the human side. We must not forget that, beyond regulations and specific architectures, content moderation ultimately depends on users’ behavior. This is something we can work on from a policy perspective, because we can educate students to become digital, civic, ethical, and global citizens.
To illustrate this, let me draw on two recently published pieces of evidence:
1. Let’s move to the world of videogames, where we see a lot of antisocial behavior. I want to refer very briefly to the results of a study carried out by a team of researchers at Riot Games, the company behind League of Legends. Many people are familiar with this multiplayer online battle arena video game, as it had 67 million players in 2016 and currently has approximately 115 million monthly players. The team of scientists has been able to gather a lot of behavioral data. What did they find?
- Only 1% of players can be categorized as “trolls” and they are responsible for 5% of all toxic behavior.
- Most toxic behavior (95%) comes from typical players “having a bad day.”
- Banning abusive players and giving them immediate feedback improves the behavior of 92% of players who displayed toxic behaviors. Quite significant.
- And finally: toxic behavior drives people away from the game or platform. You may be surprised to hear that you are 320% more likely to quit and never play again if you encounter a toxic player in your first game.
2. The second piece of evidence is an article by Shane Greenstein, Grace Gu, and Feng Zhu, recently published in the Harvard Business School Working Paper Series. They look at ideology and the type of content that different contributors and online communities produce across several platforms. They examine evidence from Wikipedians, looking at articles about U.S. politics, and analyze the factors that contribute to content moderation. There are basically two strategies: lower participation by the more biased/extreme contributors, or moderation of those biased contributors, who start producing less biased content. They found that:
- Shifts in participants’ composition account for 80% to 90% of the moderation in the content.
- Collective intelligence becomes more trustworthy when the mechanisms in place encourage confrontation between distinct viewpoints.
- Letting the most biased contributors leave the collective conversation is a good idea, as long as they can be replaced by contributors with more moderate views.
So, what have we learned so far? Existing evidence tends to suggest that:
- Toxic behavior in online communities is not necessarily about a few trolls but about normal people’s behavior.
- Once you have a bad experience, it is very unlikely that you will come back to that online community.
- We can curb this kind of behavior by adopting a “carrot and stick” approach: providing feedback and banning.
- It is the composition of the participants that explains most of the content moderation.
These points, especially the last one, lead us to the key message I want to convey: content moderation is, in large part, the result of people’s behavior.
From this point on, we need to discuss the skills users need to avoid biased and radical content and antisocial and toxic behavior, and to promote positive interactions and pro-social attitudes that result in good content for all. What skills do we need to thrive in an interconnected world and contribute to creating good online communities and content?
The first and obvious answer that comes to mind is digital literacy, emphasizing the aspects that have to do with digital citizenship and media literacy, but also skills like collaboration, empathy, creativity, ethics, global citizenship, self-regulation and critical thinking.
Let me take two of these #skills21, critical thinking and empathy, and reflect on them. Critical thinking enables people to engage with information, digital technologies, and media content in an informed and ethical way. As we are considering here, people are not only consumers of content; they are also producers. Critical thinking is key to helping individuals contribute verified, respectful, and ethical content. Empathy refers to how we treat each other, respect each other, and recognize and respect difference.
From an education perspective, how are we training people in these skills to be good digital citizens? In its latest PISA assessment, in 2018, the OECD tried to measure global competence. It is a first attempt, from an international comparative perspective, to measure things like respect for difference, sensitivity to other viewpoints, students’ ability to distinguish between right and wrong, and how they understand and critically analyze intercultural and global issues. The findings from this latest PISA round show that fewer than one in ten students in OECD countries was able to distinguish between fact and opinion, based on information they were provided regarding the content or source of the information.
The OECD also looked at the well-being of 15-year-old students and at social and emotional outcomes. It found that, across OECD countries, only about two in three students reported being satisfied with their lives. About 6% of students reported always feeling sad. And almost a quarter of students reported being bullied at least a few times a month. This is not irrelevant: remember that, as mentioned above, about 95% of bad behavior in videogames is explained by ordinary people having a bad day.
Our education systems have not been particularly good at training us for this kind of thing. They teach us what computers are best at: specialized, repetitive, and predictable work; accumulation of information and data; and compliance with instructions. As humans, however, we are much better than machines, and cannot be replaced by them, when we interconnect things that have not been linked before; when we face situations that could not be predicted; when we have to use and understand our emotions to solve a problem; or when new ideas are needed. Today, our schools are creating second-class robots instead of first-class humans trained to leverage technology and interact with it wisely.
To conclude, I would like to emphasize the idea that users play a key role in content moderation. Of course, this does not take away the responsibility of platforms, which is critical because they set the rules and incentives with which people interact, and we also know that different platforms produce very different online communities. In this context, it is important that online platforms systematically monitor content and apply pre-determined rules to make sure that it is good and acceptable by ethical standards. But we should not arrive at a point where a machine defines the relationships between humans in the virtual space. Beyond the algorithms, it should be the humans using these platforms who, through their behavior, define their final content.
Stay tuned and follow our blog series on education, economic opportunities, and #skills21. Download The Future is Now and keep an eye out for our news!
How can schools and teachers enable critical thinking for students in Latin America and the Caribbean? Leave us your comments in the section below or on Twitter @BIDeducacion #EnfoqueEducación.
Note: This blog is based on a contribution to the 2020 United Nations Internet Governance Forum, in the panel “How can policy support participative, collaborative content moderation that creates trust in platforms and the internet?”, hosted by the Wikimedia Foundation.