When our view of the world is distorted by algorithms
When you click on one link rather than another, your choice influences the content that websites will show you further down the line. The algorithms used by social media platforms like Facebook learn our preferences and serve up more and more content that matches our interests.
The risk is that we will never be shown anything that goes against our opinions, and this can distort our view of the world. “By ever more carefully selecting what we see, these algorithms are distorting reality. Social media platforms effectively become echo chambers in which opinions can become increasingly extreme,” explains Elisa Celis, senior researcher in the School of Computer and Communication Sciences (IC) at EPFL.
Algorithms shaping your opinion
This filtering can have a real impact on the reader. “Numerous studies have shown that if you are undecided about something, your decision will ultimately be influenced by the frequency and order in which you are presented with information. So these algorithms can actually shape your opinion based on biased data,” says Celis. In response, Celis worked with Nisheeth Vishnoi, a professor in the same school, to develop a system that prevents users from being fed entirely one-sided content.
They designed an algorithm that can be altered to ensure that users are shown a minimum amount of diverse content. “A social media platform could, for instance, opt to have views that oppose those of the user make up at least 10% of the newsfeed to ensure the user’s view of the world remains more balanced,” explain the researchers.
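To make the idea concrete, here is a minimal sketch of how such a diversity floor might work, assuming a pool of candidate items with hypothetical relevance scores and an opposing-view label. The function name `build_feed` and all of the data are illustrative; this is not the researchers’ actual algorithm, just one simple way to enforce a minimum quota of opposing content while otherwise ranking by relevance:

```python
# Hypothetical sketch: select a feed of k items, maximizing relevance
# while guaranteeing a minimum fraction of "opposing-view" items.
# All names, labels, and scores are illustrative assumptions.

from dataclasses import dataclass
import math

@dataclass
class Item:
    title: str
    relevance: float      # platform's predicted engagement score (hypothetical)
    opposing_view: bool   # True if the item challenges the user's stance

def build_feed(items, k, min_opposing_fraction=0.10):
    """Select k items by relevance, subject to a diversity floor."""
    quota = math.ceil(min_opposing_fraction * k)  # e.g. 10% of the feed
    opposing = sorted((i for i in items if i.opposing_view),
                      key=lambda i: i.relevance, reverse=True)
    agreeing = sorted((i for i in items if not i.opposing_view),
                      key=lambda i: i.relevance, reverse=True)
    feed = opposing[:quota]                   # reserve the quota first
    remaining = opposing[quota:] + agreeing   # then fill purely by relevance
    remaining.sort(key=lambda i: i.relevance, reverse=True)
    feed += remaining[:k - len(feed)]
    feed.sort(key=lambda i: i.relevance, reverse=True)
    return feed

items = [
    Item("Article A (agrees)", 0.95, False),
    Item("Article B (agrees)", 0.90, False),
    Item("Article C (opposes)", 0.60, True),
    Item("Article D (agrees)", 0.85, False),
    Item("Article E (opposes)", 0.40, True),
    Item("Article F (agrees)", 0.80, False),
]

for item in build_feed(items, k=5):
    print(f"{item.relevance:.2f}  {item.title}")
```

Without the quota, the five most relevant items here would all agree with the user; with it, the best-scoring opposing article is guaranteed a slot. In a real system, the relevance scores would come from the platform’s ranking model, and the minimum fraction would be a tunable parameter of the kind the researchers describe.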
An algorithm that’s just as effective
The algorithm could be easily integrated into current systems. The main challenge is getting the large corporations on board. “For platforms like Facebook, these algorithms have to be effective in order to generate advertising revenue. We wanted to show that it is possible to create an algorithm that is just as effective but that allows content to be customized in a fairer and more balanced manner,” explains Vishnoi.
Raising governments’ awareness of this issue will be key to filling the legislative gap in this area. Several human rights organizations have already shown interest in the researchers’ project, which they recently presented to delegates of human rights agencies in Geneva, including members of the United Nations Office of the High Commissioner for Human Rights.
“These algorithms are currently totally unregulated because the impact of the bias they generate is not yet properly understood. As a citizen, I feel powerless because I have no control over the content I see. The present state of affairs could turn out to be quite dangerous for democracy. We really need to look for alternative solutions,” adds Vishnoi.
More information on Theory EPFL and CNS EPFL.