"Infodemic" is a word closely tied to the Covid-19 pandemic. As the SARS-CoV-2 virus spread and turned into a pandemic, an information epidemic spread alongside it: what is now referred to as the infodemic. The term describes the rapid, far-reaching spread of both accurate and inaccurate information about Covid-19, which makes it hard to find essential, reliable information about the pandemic. This is the current state of the world.
Behind the infodemic is technology that has made it possible for anybody to share information, misinformation and disinformation widely. Today, anybody can reach a global audience using platforms such as social media. This has allowed conspiracy theories, fake news, propaganda and even outright hoaxes to spread easily. Some of this has made it harder to fight the spread of Covid-19, because people cannot tell what is true and what is false.
What role do technology companies play in this?
Tech companies dictate how people consume information. Companies such as Google, Facebook, Apple, Microsoft and Twitter have taken the place of traditional media, and their mode of operation has made it easier to share inaccurate information. As we will see, the way these platforms are designed to work is precisely what makes them effective channels for misinformation and disinformation.
Creating Filter Bubbles
A major selling point of tech-based media companies is the ability to customize content for every person, delivering to each user exactly the content that interests them. This approach has helped tech companies grow, because users receive only what they want to see and hear, unlike traditional media's one-size-fits-all model.
Unfortunately, this approach has a serious downside. Customized content means that a person can avoid anything contrary to what they want to hear while still being fed an endless stream of what they do want. People see only the information they prefer, and opposing voices are gradually filtered out. You are shown information that aligns with your beliefs and worldview, even when it is wrong.
This has helped fan misinformation and disinformation, because objective facts are shunned in favor of content that generates likes and clicks. Recommendation engines give users what they want to see, not what is truthful or factual. This works hand in hand with confirmation bias, the innate human tendency to seek out facts that confirm what we already believe or want to believe.
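To make the mechanism concrete, here is a deliberately simplified toy sketch of an engagement-driven ranker. The function name, the post data and the scores are all invented for illustration; real recommendation systems are vastly more complex. The point it demonstrates is structural: if the ranking score is built only from interest overlap and popularity, accuracy never enters the calculation.

```python
# Toy model of an engagement-optimized feed ranker (illustrative only).
# Note that the "accurate" field exists in the data but is never used
# by the scoring function -- truthfulness plays no role in the ranking.

def rank_feed(posts, user_interests):
    """Order posts by predicted engagement: overlap with the user's
    existing interests multiplied by raw popularity (clicks)."""
    def engagement_score(post):
        overlap = len(set(post["topics"]) & set(user_interests))
        return overlap * post["clicks"]
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "WHO guidance on transmission", "topics": ["science"],
     "clicks": 120, "accurate": True},
    {"title": "5G causes Covid-19", "topics": ["conspiracy"],
     "clicks": 900, "accurate": False},
]

# A user already leaning toward conspiracy content sees the false post
# ranked first, because the ranker rewards engagement, not truth.
feed = rank_feed(posts, user_interests=["conspiracy"])
print(feed[0]["title"])  # 5G causes Covid-19
```

The same data served to a user whose interests are tagged "science" would rank the accurate post first, which is exactly the filter-bubble effect: each user's feed simply mirrors their existing leanings back at them.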
At the onset of the pandemic, people were looking for answers and narratives to help explain what was happening. Some turned to science, others to pseudo-science, and others to conspiracy theories. The origin of the virus, its mode of transmission, how it was spreading from one region to another, and whether it was a natural occurrence or a bioweapon were all matters of speculation.
People formed opinions and narratives to make sense of the world, then went online to find information that supported them. When they found content that matched what they already believed, the belief was strengthened further. Consequently, some conspiracy theories gained momentum through online sources. Some people believed that 5G networks caused Covid-19, and there was plenty of content online to support this. Others claimed that Africans were immune to the virus, and there were online communities promoting this idea.
By helping people confirm what they already assumed, tech companies helped spread misinformation and disinformation.
Amplifying Influencers
Celebrities and influencers carry enormous weight in today's societies. They can bypass the traditional media and deliver their message to millions of people using tools such as social media.
Unfortunately, the information they pass on may not be accurate, and at times it is completely wrong. Tech companies were at a loss over what to do with such people, and the Covid-19 pandemic made things worse. The problem is that when these influential figures share information, it is believed by the many people who look up to them.
Attempting to regulate the kind of information shared on tech platforms has a serious downside of its own. It is hard to regulate content in a post-truth society, where everything is relative. This is why tech companies did not know what to do about the problem of fake news until after the pandemic and the 2020 US election.
Moderation remains a problem for many reasons. The number of users is enormous, and they post in many languages. Political interests are at stake, and some matters are heavily disputed even among experts. It is also a delicate matter of freedom of speech, because a free society needs space even for dissenting opinions. As Natan Sharansky puts it, free societies are societies in which the right of dissent is protected.
Technology companies should have foreseen this and planned for it early enough. Instead of waiting to act until the world faced a matter of life and death, they should have had preset guidelines in place.