Algorithms and Free Speech
In the last part of this series, I discussed Elon Musk’s approach to managing Twitter, in particular scrutinizing his self-proclaimed status as a ‘free-speech absolutist.’ Although he has made it clear that he intends to make Twitter more transparent, even releasing controversial internal materials (known as the Twitter Files) to select journalists, his actions reveal other motivations.
Between his support for reinstating the accounts of various extremist users and his compliance with India’s censorship laws to block a controversial documentary critical of Prime Minister Modi, it seems clear that his support for free speech is conditional.
You may wonder why this is important. So what if he reinstates banned accounts? Isn’t that what social media is for? Shouldn’t we expect users to filter out content they’re uninterested in on their own? In theory, yes. To an extent, it could be argued that reintroducing extremist content to the platform merely supports a marketplace of ideas: a theory of free speech that discourages censorship in favor of a free flow of ideas. The idea is that, on a platform open to all, people can absorb the full range of available information and then decide for themselves what they believe. In principle, this is a great philosophy to run a social media platform on. It gives users an opportunity to express themselves and engage with ideas and beliefs other than their own.
However, there has been a rise in the spread of disinformation online: false or misleading information spread with the intent to deceive. Disinformation is often political, but it doesn’t have to be, as public health communication during the COVID-19 pandemic made painfully clear.
It could be argued that we should be able to work out for ourselves that drinking bleach is probably not a cure for a deadly virus. But when algorithms come into the equation, it becomes increasingly difficult to sort out what is actually true.
Algorithms are designed to keep users engaged. If you’ve ever wondered how your TikTok or Instagram feed is so perfectly tailored to your interests, it’s thanks to a recommendation algorithm designed to keep you hooked and entertained. Realistically, it is in platform owners’ best interest to keep their users happy, and one of the best ways to do this is to use algorithms to serve an endless feed of content so perfectly customized that a user never has to engage with anything they don’t like.
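To make that concrete, here is a minimal sketch of what engagement-driven ranking boils down to. This is a toy illustration, not any platform’s actual code: the Post structure and the predicted_engagement scoring are hypothetical stand-ins for models trained on thousands of behavioral signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    appeal: float  # hypothetical stand-in for how inherently engaging the post is

def predicted_engagement(post: Post, interests: dict[str, float]) -> float:
    """Score a post by how likely this user is to engage with it.

    A real recommender uses a learned model over thousands of signals;
    here we simply weight the post's appeal by the user's affinity for
    its topic.
    """
    return post.appeal * interests.get(post.topic, 0.0)

def rank_feed(posts: list[Post], interests: dict[str, float]) -> list[Post]:
    # The feed is every candidate post, sorted by predicted engagement:
    # whatever the model thinks you will tap, watch, or share rises to the top.
    return sorted(posts, key=lambda p: predicted_engagement(p, interests), reverse=True)

# Hypothetical user who engages heavily with knitting content:
feed = rank_feed(
    [Post("knitting", 0.6), Post("news", 0.9), Post("knitting", 0.8)],
    interests={"knitting": 0.9, "news": 0.2},
)
print([p.topic for p in feed])  # ['knitting', 'knitting', 'news']
```

Notice what the sort key optimizes for: nothing in it asks whether a post is true or healthy to consume, only whether this particular user is likely to engage with it.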
So why does this matter? Truthfully, I love that my feed can magically adjust itself to whatever fleeting hobby I’ve most recently picked up. It saves me the tedious work of sifting through the endless array of available content. I could spend hours on my Instagram feed and never get bored.
The problem lies in the echo chambers and filter bubbles that algorithms create: an echo chamber surrounds a user with voices that already agree with them, while a filter bubble is the algorithmic curation that screens everything else out. Both pigeonhole users into interacting with and consuming only the content the algorithm deems relevant to them. The more a user engages with, say, right-wing content, the more saturated their feed becomes with similar content.
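The feedback loop that produces this saturation is easy to simulate. The toy model below makes the same simplifying assumptions as the sketch above: each time content is shown, the user engages, and that engagement is fed back as a signal that strengthens the very preference that surfaced the content in the first place.

```python
def simulate_feedback_loop(rounds: int = 20) -> dict[str, float]:
    # Hypothetical starting profile: a slight lean toward one kind of content.
    interests = {"right_wing": 0.55, "everything_else": 0.45}

    for _ in range(rounds):
        # The algorithm surfaces whatever currently predicts the most engagement...
        shown = max(interests, key=interests.get)
        # ...the user engages, and that engagement is logged as a signal,
        # strengthening the very preference that got the content shown.
        interests[shown] += 0.05
        # Renormalize so the profile remains a distribution over topics.
        total = sum(interests.values())
        interests = {topic: weight / total for topic, weight in interests.items()}

    return interests

print(simulate_feedback_loop())
# A 55/45 split drifts to roughly 83/17 after twenty rounds: the feed
# saturates with whichever content the user leaned toward at the start.
```

Even a slight initial lean compounds round after round, which is exactly the pigeonholing described above.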
From a business standpoint, it makes sense to do whatever it takes to keep users interested and engaged. But from a freedom-of-speech perspective, these echo chambers and filter bubbles lead more vulnerable users down a dangerous path, as seen around the January 6th insurrection, the COVID-19 pandemic, and beyond.
With the mass of information constantly available to us, it feels daunting to bear the responsibility of filtering out the nonsense and determining what is worth consuming. While there are users making noise and calling out extremist behavior online, social media platforms should be held accountable for allowing content that promotes harm, and for silencing the users who call them out.