Algorithms are moulding our politics. Here’s how to avoid being gamed

The political content in our personal feeds sometimes creates “alternative” realities, says the writer. FILE PHOTO: Manjunath Kiran/AFP/Getty Images

Published Mar 26, 2023


In 2016, evidence began to mount that then-South African president Jacob Zuma and a family of Indian-born businessmen, the Guptas, were responsible for widespread “state capture”.

It was alleged that the Gupta family influenced Zuma’s political appointments and benefited unfairly from lucrative tenders.

The Guptas began to look for a way to divert attention from themselves. They enlisted the help of British public relations firm Bell Pottinger, which drew on the country’s racial and economic tensions to develop a social media campaign centred on the role of “white monopoly capital” in continuing “economic apartheid”.

The campaign was driven by the power of algorithms.

The company created more than 100 fake Twitter accounts run by bot software – computer programs designed to perform automated tasks, ranging from the simple to the complex. In this case, the bots simulated human engagement by liking and retweeting tweets.
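
To make the mechanism concrete, here is a minimal sketch of what such bot software amounts to. The `client` object and its `search`, `like` and `retweet` methods are hypothetical stand-ins for a platform API, not any real library:

```python
import random
import time

def run_amplification_bot(client, hashtag: str) -> None:
    """Like and retweet posts matching a hashtag, at a human-like pace.

    `client` is a hypothetical platform API wrapper; real platforms
    require authenticated credentials and rate-limit such behaviour.
    """
    while True:
        for post in client.search(hashtag):       # hypothetical endpoint
            client.like(post.id)                  # hypothetical endpoint
            if random.random() < 0.5:             # only retweet some posts
                client.retweet(post.id)           # hypothetical endpoint
            time.sleep(random.uniform(30, 300))   # pause like a human would
```

Run in parallel across a hundred accounts, a loop this simple is enough to make a fringe talking point look like a groundswell.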

This weaponisation of communications is not limited to South Africa.

Examples from elsewhere in Africa abound, including Russia currying favour in Burkina Faso via Facebook and co-ordinated Twitter campaigns by factions representing opposing Kenyan politicians.

It’s seen beyond the continent, too – in March 2023, researchers identified a network of thousands of fake Twitter accounts created to support former US president Donald Trump.

Legal scholar Antoinette Rouvroy calls this “algorithmic governmentality”. It’s the reduction of government to algorithmic processes, as if society were a problem of big data sets rather than a question of how collective life is (or should be) arranged and managed by the individuals in that society.

In a recent paper, I coined the term “algopopulism”: algorithmically aided politics. The political content in our personal feeds not only represents the world and politics to us. It creates new, sometimes “alternative”, realities. It changes how we encounter and understand politics and even how we understand reality itself.

One reason algopopulism spreads so effectively is that it’s difficult to know exactly how our perceptions are being shaped. This is deliberate: algorithms are sophisticated by design, built to override human reasoning.

What can you do to protect yourself from being “gamed” by algorithmic processes? The answers, I suggest, lie in understanding a bit more about the digital shift that’s brought us to this point and the ideas of a British statistician, Thomas Bayes, who was born more than 300 years ago.

How the shift happened

Five recent developments in the technology space have led to algorithmic governmentality: considerable improvements in hardware; generous, flexible storage via the cloud; the explosion of data and data accumulation; the development of deep convolutional neural networks and other sophisticated algorithms to sort through the accumulated data; and the development of fast, cheap networks to transfer data.

Together, the developments have transformed data science into something more than a mere technological tool. It has become a method for using data not only to predict how you engage with digital media, but to pre-empt your actions and thoughts.

This is not to say that all digital technology is harmful. Rather, I want to point out one of its greatest risks: we are all susceptible to having our thoughts shaped by algorithms, sometimes in ways that can have real-world effects, such as when they affect democratic elections.

Bayesian statistics

That’s where Thomas Bayes comes in. Bayes (c. 1701–1761) was an English statistician and minister; Bayesian statistics, one of the foundational paradigms of machine learning, is named after him.

Before Bayesian methods took hold in computing, computational processes relied on frequentist statistics. Most people have encountered the method in one way or another – for instance, in asking how probable it is that a coin will land heads or tails. This approach starts from the assumption that the coin is fair and hasn’t been tampered with. That assumption is called a null hypothesis.
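
As a concrete illustration – the flip counts here are made up – a frequentist test assumes the null hypothesis and asks how surprising the observed data would be if it were true:

```python
# Frequentist coin test: assume the null hypothesis (a fair coin,
# p_heads = 0.5) and ask how surprising the data are under it.
from scipy.stats import binomtest

heads, flips = 62, 100                   # illustrative data
result = binomtest(heads, flips, p=0.5)
print(f"p-value under the fair-coin null: {result.pvalue:.3f}")
# A small p-value is taken as grounds to reject the null hypothesis.
```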

Bayesian statistics does not require a null hypothesis; it changes the kinds of questions asked about probability entirely. Instead of assuming the coin is fair and measuring the probability of heads or tails, it asks how strongly we should believe the coin is fair, given the flips we have observed. Instead of assuming the truth of a null hypothesis, Bayesian inference starts with a measure of subjective belief – a prior – which it updates as more evidence, or data, is gathered in real time.
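
Sticking with the same made-up coin data, a minimal Bayesian sketch replaces the null hypothesis with a prior belief over the coin’s bias and updates it as evidence arrives:

```python
# Bayesian coin analysis: start from a prior belief over p_heads
# and update it with the observed evidence (conjugate Beta update).
from scipy.stats import beta

prior_a, prior_b = 1, 1                  # Beta(1, 1): a flat prior belief
heads, tails = 62, 38                    # the evidence gathered so far
posterior = beta(prior_a + heads, prior_b + tails)
print(f"posterior mean for p_heads: {posterior.mean():.3f}")
low, high = posterior.interval(0.95)
print(f"95% credible interval: ({low:.3f}, {high:.3f})")
```

Note that nothing is accepted or rejected outright; the belief simply shifts as data accumulate – which is exactly the property that makes the approach useful to recommendation systems.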

How does this play out via algorithms? Let’s say you heard a rumour that the world is flat and you do a Google search for articles that affirm this view. Based on this search, the measure of subjective belief the algorithms have to work with is “the world is flat”. Gradually, the algorithms will curate your feed to show you articles that confirm this belief, unless you have purposefully searched for opposing views too.

That’s because Bayesian approaches use prior distributions – existing knowledge or beliefs – as the starting point for probability. Unless you change your prior distributions, the algorithm will continue providing evidence that confirms your initial measure of subjective belief, as the sketch below illustrates.
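
Here is a toy sketch of that reinforcement loop – emphatically not any platform’s actual code – in which each click is treated as evidence of interest in a topic and the feed is re-ranked by the updated belief:

```python
# Toy feed ranker: clicks update Beta pseudo-counts per topic,
# and the feed is ordered by the posterior mean interest.
from collections import defaultdict

interest = defaultdict(lambda: [1, 1])   # topic -> [a, b] pseudo-counts

def record(topic: str, clicked: bool) -> None:
    a, b = interest[topic]
    interest[topic] = [a + clicked, b + (not clicked)]

def score(topic: str) -> float:
    a, b = interest[topic]
    return a / (a + b)                    # posterior mean interest

record("flat earth", clicked=True)        # one search seeds the prior...
record("flat earth", clicked=True)
record("round earth", clicked=False)
feed = sorted(["flat earth", "round earth"], key=score, reverse=True)
print(feed)  # ['flat earth', 'round earth'] – the initial belief is reinforced
```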

But how can you know to change your priors if your priors are being confirmed by your search results all the time? This is the dilemma of algopopulism: Bayesian probability allows algorithms to build sophisticated filter bubbles that are difficult to break out of, because every new result is conditioned on your previous searches.

There is no longer a uniform version of reality presented to a specific population, like there was when TV news was broadcast to everyone in a nation at the same time. Instead, we each have a version of reality. Some of this overlaps with what others see and hear and some doesn’t.

Engaging differently online

Understanding this can change how you search online and engage with knowledge.

To avoid filter bubbles, always search for opposing views. If you haven’t done this from the start, run the same search in a private browsing window and compare the results – the sketch below shows one quick way to quantify the difference.
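
This is a minimal comparison sketch; the URLs are placeholders you would replace with the results from your two sessions:

```python
# Compare search results from a normal and a private browsing session.
# The URLs below are placeholders; paste in your own results.
normal_session = {"a.com/1", "b.com/2", "c.com/3", "d.com/4"}
private_session = {"a.com/1", "e.com/5", "f.com/6", "d.com/4"}

shared = normal_session & private_session
jaccard = len(shared) / len(normal_session | private_session)
print(f"shared results: {sorted(shared)}")
print(f"overlap score (Jaccard): {jaccard:.2f}")  # low overlap = stronger bubble
```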

More importantly, check your personal investment. What do you get out of taking a specific stance on a subject? For example, does it make you feel part of something meaningful because you lack real-life social bonds?

Finally, endeavour to choose reliable sources. Be aware of a source’s bias from the start and avoid anonymously published content.

In these ways we can all be custodians of our individual and collective behaviour.

* Chantelle Gray is a professor in the School of Philosophy, North-West University.

** This article is republished from The Conversation under a Creative Commons license. Read the original article.

