
How Generative AI Will Affect Election Misinformation in 2024


By Linda Raftree


Generative AI models like DALL-E and ChatGPT showcase the creativity and utility of AI. Yet these tools – which can generate realistic image, audio and video simulations at scale and then personalize this content – have serious implications for how much people trust the information they see and hear.

Civil society organizations are highly concerned about how these tools might affect the 80+ elections happening around the world in 2024. In addition, there is looming concern that the harmful effects of GenAI will disproportionately affect people and groups who are already at risk due to their gender, ethnicity, sexual orientation, profession, or social group identities.

At our December 4th Technology Salon we delved into GenAI’s role in both fueling and addressing mis- and disinformation.

Spearheading the discussion were:

The Rockefeller Foundation graciously hosted this discussion. Below is a summary of the key points we discussed.

The expanding universe of GenAI harms

GenAI enables harm across video, audio, and text modalities. The top concerns are:

  • Nonconsensual pornography and child sexual abuse material (CSAM), which are the most common types of harmful synthetic media
  • Proliferation of spam and low-quality websites that give harmful advice (like unproven, fake cancer treatments), which is leading to a degraded search ecosystem
  • Targeted scams and spear phishing, which are becoming more sophisticated – for example, someone can scrape your voice from social media and use it to call your grandma and ask her for money
  • The nature of disinformation is shifting: there is less purely false information and more decontextualized and misleading content, which is harder to catch and debunk. Global consultations conducted by WITNESS found that some 80% of mis- and disinformation involved mis- or dis-contextualization
  • Harassment and extremist propaganda are less common right now, but they are coming
  • Biases and discrimination inherent in the AI models remain problematic.

GenAI is transforming how disinformation is created

Harmful content can be created at scale using GenAI with alarming ease. Ethnographic research with disinformation creators in Brazil and the US revealed that GenAI acts as a productivity tool, increasing the volume and sophistication of content. Amateurs are entering the game more easily, and there is a significant move towards exploiting social media for financial gain. Sophisticated misinformation creators are very capable of scaling up and of using AI to translate their content into other languages, bringing it to countries and audiences they couldn't reach before.

The effectiveness of detection technologies that could be used against deceptive GenAI tech is questionable. The consensus across tech platforms is largely that detection is a losing battle and that information overload will only get worse. Humans are already largely unable to distinguish existing GenAI content from human-created content (a reverse Turing Test of sorts); our accuracy is statistically about the same as a coin toss.
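To make the coin-toss comparison concrete, here is a minimal Python sketch using hypothetical numbers (52 correct judgments out of 100; the figures are illustrative, not from the Salon or any specific study). A binomial test against the 50% chance baseline shows that such performance cannot be distinguished from guessing. It assumes SciPy is installed.

```python
# Illustrative only: hypothetical human-detection results, not real study data.
from scipy.stats import binomtest

correct, trials = 52, 100                    # 52 correct calls out of 100 items
result = binomtest(correct, trials, p=0.5)   # test against coin-flip chance
print(f"Accuracy: {correct / trials:.0%}, p-value vs. chance: {result.pvalue:.2f}")
# The large p-value (roughly 0.76 here) means this accuracy is statistically
# indistinguishable from flipping a coin.
```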

Detection dilemmas

Sophisticated watermarking techniques like Google's SynthID and Adobe Firefly's Content Credentials aren't widely used by average online users. Humans tend to revert to existing trust heuristics (likes and comments) rather than look for watermarking or metadata to understand the veracity of an image.
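As a rough illustration of what checking metadata involves, the Python sketch below reads whatever EXIF tags survive in an image file using Pillow. It is a toy under stated assumptions: the filename is hypothetical, and real provenance verification (C2PA Content Credentials, SynthID) needs dedicated tools, since EXIF data is routinely stripped by platforms and re-encoding.

```python
# Toy metadata inspection with Pillow. Absence of metadata proves nothing:
# platforms and re-encoding routinely strip it, and it can also be forged.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image_metadata(path: str) -> dict:
    """Return whatever EXIF tags survive in an image file."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = inspect_image_metadata("example.jpg")  # hypothetical file
if not meta:
    print("No EXIF metadata found (common for synthetic or re-encoded images).")
else:
    for key, value in meta.items():
        print(f"{key}: {value}")
```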

We need to develop new trust heuristics to accompany us in the era of GenAI. The emphasis is shifting from technical fixes towards classic media literacy and critical thinking to combat GenAI deception. It's an arms race, or a game of whack-a-mole: as soon as technologies and techniques are developed to combat disinformation, creators find new ways of escaping detection.

Loss of shared truth

Mis- and disinformation affect us psychologically. When anything could be fake, how do people know who and what to trust? How can we work to elevate voices that are trustworthy? How will this affect children and young people and the future of public discourse?

As proxies for what is 'true' become more difficult to identify, people may retreat further into echo chambers, only trusting and listening to the people they already know. Alongside this is an increasing fragmentation of the web and a reduction in the public commons as a space for ideas.

Objective truth is a difficult concept, however. People tend to believe what they want to believe and cherry-pick 'truth' based on their existing beliefs and worldviews, without placing much value on whether something is 'objectively' true or not.

People may choose to share media that they feel is 'within character' for someone while fully knowing it is fake or misleading. As one Salon participant noted, those who are unconcerned about 'fake news' are less likely to check watermarks or metadata, or to install browser extensions that rate the trustworthiness of news sources.

Elections will test the information ecosystem

2024 will be a test for the whole information ecosystem. Two billion people will go to the polls in some 80 countries. Already there are emerging uses of AI – such as audio content of Modi's voice in India – that are raising big concerns about what kinds of disinformation will affect election processes.

The proliferation of synthetic audio in particular — and lack of tools for watermarking it — are especially challenging. To detect fake or altered audio, “you have to know the person and how they speak, it’s not as easy as seeing extra fingers or noticing video glitches.” There are fewer experts and reliable detection tools for audio, especially for languages other than English.

Alongside GenAI, there are two other things that will leave us all stumbling in the dark to understand mis- and disinformation during and after the 2024 elections, said one discussant.

First, the reduction or outright elimination of Trust and Safety teams across most platforms (YouTube is down to one policy person, Facebook is no longer supporting fact checkers, and Twitter fired most of its Trust and Safety staff) leaves platforms in a weak position to combat mis- and disinformation.

Second, the attack by Republican politicians on mis- and disinformation research – which is seen as partisan and left-leaning – has created a chilling effect for researchers, platforms, and funders, who have found themselves in the firing line of the right wing. This means we will likely know less about mis- and disinformation on platforms than before.

Are there any bright spots?

As the world's largest online encyclopedia, Wikipedia plays an essential role in training AI: almost every LLM currently available was trained on Wikipedia data, and it is almost always the largest source of training data in LLM data sets. The platform plays an outsized role in the information ecosystem with respect to the development of AI and is working to improve the health of this ecosystem.

While GenAI tools can synthesize or summarize existing knowledge, they cannot engage in discussions and conversations to create content in the way that Wiki communities do. As machine generated content becomes more pervasive, human- and community-created content becomes more valuable as it can serve as a reliable source.

GenAI tools, however, need to improve how they recognize and provide attribution to the human contributions they are built on. While LLMs such as Google Bard and BingGPT are already starting to provide citations in their outputs, there is an urgent need to standardize this approach across all LLMs. Wikipedia can be an antidote to disinformation, and AI can help in this sense.
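What a standardized attribution format might look like is still an open question. The Python sketch below is one hypothetical shape for machine-readable citations attached to model output; the class names and the example citation are illustrative assumptions, not any existing LLM API.

```python
# Hypothetical attribution format for model outputs: an illustration of the
# kind of standard discussed above, not an existing API.
from dataclasses import dataclass, field

@dataclass
class Citation:
    title: str
    url: str

@dataclass
class AttributedAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        refs = "\n".join(f"[{i + 1}] {c.title} - {c.url}"
                         for i, c in enumerate(self.citations))
        return f"{self.text}\n\nSources:\n{refs}" if refs else self.text

answer = AttributedAnswer(
    text="Wikipedia is almost always the largest source of LLM training data.",
    citations=[Citation("Wikipedia", "https://en.wikipedia.org/wiki/Wikipedia")],
)
print(answer.render())
```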

One creative idea is to use GenAI in the fight against mis- and disinformation by surfacing and amplifying positive societal impact. Could AI tools be trained to identify constructive conversations and to moderate borderline discriminatory or manipulative speech?
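As a deliberately simple thought experiment in that direction, the sketch below scores comments against two hand-picked marker lists, one leaning constructive and one leaning manipulative. The word lists and the crude scoring are illustrative assumptions; a real system would use a trained classifier or an LLM with a moderation rubric.

```python
# Toy scorer for the idea above. The marker lists are illustrative
# assumptions, not a vetted lexicon.
CONSTRUCTIVE_MARKERS = {"evidence", "source", "i agree", "your perspective", "clarify"}
MANIPULATIVE_MARKERS = {"sheeple", "wake up", "they don't want you to know"}

def score_comment(text: str) -> int:
    """Positive scores lean constructive; negative scores lean manipulative."""
    lowered = text.lower()
    plus = sum(marker in lowered for marker in CONSTRUCTIVE_MARKERS)
    minus = sum(marker in lowered for marker in MANIPULATIVE_MARKERS)
    return plus - minus

print(score_comment("Can you share a source? I'd like to clarify the claim."))      # 2
print(score_comment("Wake up, sheeple -- they don't want you to know the truth."))  # -3
```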

Another useful approach has been 'prebunking' mis- and disinformation. This approach relies on the foundational, universal human instinct to resist being manipulated. Researchers have found that a 3-part formula is effective:

  1. Letting people know they are going to see content that’s trying to manipulate them.
  2. Showing them a little micro-dose.
  3. Then giving them the counterarguments to show the type of mis- or disinformation being used.

This can be done in as little as 30 seconds through an ad and has shown promising results. LLMs can help researchers identify the complex attributes of rhetorical techniques, which can feed into the creation of prebunking messaging.
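As an illustration of how the three-part formula could be assembled into a message, here is a small Python sketch. The technique, micro-dose, and refutation texts are placeholders invented for this example, not material from any real prebunking campaign.

```python
# The three-part prebunking formula as a data structure. All example text
# below is an invented placeholder.
from dataclasses import dataclass

@dataclass
class PrebunkMessage:
    technique: str    # 1. the manipulation technique to forewarn about
    micro_dose: str   # 2. a weakened example of the technique
    refutation: str   # 3. the counterargument exposing the trick

    def render(self) -> str:
        return (
            f"Warning: you may soon see content using '{self.technique}' "
            f"to manipulate you.\n"
            f'Example: "{self.micro_dose}"\n'
            f"Why it is misleading: {self.refutation}"
        )

msg = PrebunkMessage(
    technique="false dichotomy",
    micro_dose="Either you support this policy or you hate your country.",
    refutation="Real issues rarely have only two options; the framing hides "
               "the middle ground to force you onto a side.",
)
print(msg.render())
```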

Who is responsible for improving trust?

While GenAI is new, the problems that we are discussing are not, as some participants noted. The burden for detecting what is real and what is misleading or fake is being put on individuals as opposed to being addressed through global governance of AI. Responsibility should lie across the entire information pipeline.

One discussant mentioned that the Partnership on AI offers an ethical framework on how to disclose synthetic media – from the level of creator, to tech company, to distribution platform. All have a responsibility to disclose. Training for journalists, fact checkers, and individuals is an important part of the equation.

Civil society and the social sector should continue putting effort into supporting this work and trusted partners, and into encouraging human rights-based risk assessments of GenAI. Labeling and disclosure are also important, according to one lead discussant. While it is (and will probably remain) a cat-and-mouse game, it is important to have forensics and ways to detect fake media.

Media literacy – updated for the current realities of how mis- and disinformation are created and spread – needs more investment and work as well. The role of humor, art and artists is underappreciated and should be bolstered, given that artists can call attention to social, cultural and other angles of GenAI and information integrity in our societies and help people to see and understand with different parts of their brains.

What does the future hold?

The future won't be full Armageddon, noted our lead discussants, but the lack of funding for researchers and Trust and Safety is hugely concerning, especially when it comes to elections and democracy. Trust and Safety teams have disappeared at a time of increasing uncertainty, with potentially huge implications. We will be largely unable to see or detect what is happening, and communities and societies will be harmed by this.

We should avoid being distracted by the overblown existential threats of AI and keep our eyes on the ball to address these real and highly consequential threats of GenAI. Elections could be an incentive for drawing attention to these issues, and there are plenty of people who are interested in continuing with these discussions and this work!

Source: ictworks.org

