By George Ogola
The information landscape in Africa – as elsewhere in the world – has expanded exponentially over the last decade. The proliferation of platform media, including Facebook, Twitter and YouTube, has been instrumental in this expansion. This has created important new debating spaces.
These platforms have now become essential for political campaigns across the continent. In Kenya, for example, social media has turned into a powerful new battleground in electoral politics.
Traditionally, political debates have been shaped by mainstream media. But public trust in these media has waned over the years. The country’s mainstream media remains strongly wedded to factional ethnic and class interests, which has increasingly undermined its capacity to facilitate fair and open debate, particularly during elections.
Social media platforms have exploited this trust deficit, acting as important alternative sites for political deliberation. However, they have also become powerful tools for disinformation and misinformation.
According to a recent report by the Mozilla Foundation, which campaigns for an open and accessible internet, there is now a relatively well-established disinformation industry in Kenya. It is largely driven by social media influencers.
Over the last 10 years, I have carried out research on the interface between digital technologies and politics in Kenya. The Mozilla report demonstrates what I’ve witnessed – the evolution of the political role of some of the country’s digital spaces.
There is no evidence that disinformation and misinformation can, on their own, determine the outcome of elections. Nevertheless, they pose a danger to democratic processes.
They also poison an important space in which deliberative politics should take place. In politically charged environments such as Kenya’s, they can exploit long-held divisions, with the potential to trigger violence.
Manipulation tools
The Mozilla Foundation report notes that social media influencers are capable of manipulating Twitter’s algorithms to determine trending topics. This is significant because such topics tend to shape editorial agendas outside the online platform.
The report identifies a combination of methods used to facilitate this manipulation. One is the use of sock puppet accounts – multiple accounts controlled by the same user. Another is astroturfing: the practice of masking the sponsors of online messages so that they appear organic.
Using these kinds of tools, social media influencers can counter any negative stories about the people who are paying them – or malign opponents.
The Mozilla report cites the online reaction to the Pandora Papers leaks. This was an investigative report by the International Consortium of Investigative Journalists that exposed a “shadow financial system that benefits the world’s most rich and powerful”.
The papers revealed how powerful individuals, including the family of Kenya’s President Uhuru Kenyatta, were using tax havens and secrecy jurisdictions to avoid public scrutiny of their assets. The authors of the Mozilla report uncovered a sophisticated strategy to counter the largely incriminating evidence against the president’s family. It involved astroturfing and the use of hashtags such as #phoneyleaks and #offshoreaccountsfacts.
Disinformation and misinformation, especially at election time, are not new in Kenya. But platform media provide easier and faster ways of fabricating information and distributing it at scale. Those involved operate with little fear because the platforms enable anonymity and pseudonymity.
The rise of these practices was evident in Kenya’s 2017 elections, which attracted both local and international actors. A notable example was the infamous Cambridge Analytica case, which involved large-scale data manipulation and the deliberate posting of fake news.
There is evidence that these practices are on the rise ahead of the poll scheduled for August 2022.
Why solutions are hard to come by
Disinformation and misinformation practices involve big tech companies, governments and the public. Their interests and priorities don’t always converge. This makes policy responses particularly challenging.
In addition, many governments are failing to act because of conflicting demands. On the one hand, they need to protect the public from perceived harmful information. On the other, they need to protect citizens’ rights to information and freedom of expression.
It gets even more complicated in countries such as Kenya, where the state and extensions of the state are actively involved in misinformation and disinformation campaigns.
In Kenya, media owners are typically beneficiaries of a licensing regime that rewards supporters of the government. Many of these owners are politicians keen to use their media outlets for political mobilisation, sometimes through misinformation and disinformation campaigns. This can extend to actively undermining potentially effective policy responses that don’t suit their interests.
Another major problem is that social media influencers have a financial incentive to participate in disinformation practices. Political players are spending large amounts of money online to popularise their candidature and undermine opponents. These online platforms offer immediacy and scale.
Still, some policy responses from Canada and Sweden could form the basis for the development of local solutions.
Canada has taken the problem out of the state’s hands. It has done this by creating a nonpartisan panel tasked with decisions on disinformation practices. In Sweden, intelligence agencies work with journalists to address disinformation. As Chris Tenove, a research fellow at the University of British Columbia, puts it:
“This uses the insights of intelligence agencies but leaves public communication up to independent journalists.”
These approaches are not necessarily a panacea for such practices. However, they offer a good starting point from which relevant, context-specific responses can be developed.