For better or worse: How AI can impact elections

Tuesday, September 10, 2024

With about half the world’s population heading to the polls this year, AI election influence is a widely debated issue. We take a look at how Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) researchers are endeavoring to make AI a force for good amidst the commotion.

AI is an agnostic tool: much depends on how it is used. Earlier this year, major technology firms including Google, Microsoft and Meta signed the AI Elections Accord, an initiative launched to combat the deceptive use of AI. “Out of an abundance of caution,” Google’s Gemini has since been blocked from answering election questions over fears of disinformation or hallucinations.

There is a good reason why large language models (LLMs) like Gemini and ChatGPT are under the microscope over the accuracy of the information they generate. “According to a recent study, ‘fake news’ travels six times faster than real news, and most of that happens in the first 10 minutes,” says Dilshod Azizov, MBZUAI natural language processing (NLP) master’s graduate and research assistant. When it comes to fact-checking, every minute counts and humans are simply too slow. “For one article, it will take days just to assess it and annotate it,” he says.

Disinformation has long been a problem for democracy, but AI amplifies the issue in both scale and targeting. More than one in 10 interactions with news on Twitter/X and Facebook happened on stories from unreliable sites, according to independent news verifier NewsGuard. NewsGuard is also tracking over 1,000 sites creating AI-generated news with little to no human oversight.

Using machine learning techniques, deep learning algorithms, graph neural networks and LLMs, Azizov and other MBZUAI researchers aim to spot patterns that will improve the ways we verify news. One recent finding, for example, suggests that longer articles tend to be more factual, with accurate stories typically running from 1,000 to 1,500 words or more.

“The real problem isn’t just that some content is false, but rather that the news has become weaponized,” says Preslav Nakov, department chair and professor of NLP at MBZUAI. “But while LLMs can generate inaccurate text, they are also an effective tool to analyze it.”

News presentation and reporting can shape voting behavior, with a recent study suggesting political bias in news coverage can shift undecided voters’ preferences by at least 20%. Nakov’s latest work, FRAPPE (the Framing, Persuasion, and Propaganda Explorer), holds significant potential for the way news is consumed and understood. The system employs machine learning algorithms and NLP techniques to identify persuasive techniques such as simplification and manipulative wording. It also assesses framing choices, such as editorial decisions or image use, to break news articles down into their core components. Ultimately, the aim is to make it easier for users to grasp the underlying context and significance of what they are reading.

Under Nakov’s guidance, Azizov’s research also focuses on detecting the political bias of publications. Machine learning models trained on datasets from AllSides and Media Bias/Fact Check analyze news articles by examining language patterns, word choices, and framing.

Azizov has also examined how minor word choices, identified by machine learning, signal political bias. On social care, for example, a left-leaning source might use phrases like “crucial social programs” and “necessary support to the most vulnerable” to highlight positive aspects. In contrast, a right-leaning source might frame the same policy negatively, with phrases like “overburdening hardworking taxpayers.” A centrist source would present the issue neutrally, stating that “the government is allocating more funds for social programs” without emotionally charged language.
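To make the idea concrete, here is a toy illustration of how charged phrasing can be turned into a signal. This is not MBZUAI’s actual model: the phrase lexicons below are invented for illustration, and real systems learn such cues statistically from large labeled corpora (e.g., articles rated by AllSides) rather than from hand-written lists.

```python
# Toy bias-cue scorer (illustrative only; not the researchers' method).
# Counts occurrences of hand-picked emotionally charged phrases and
# labels the text by whichever side's cues dominate.

LEFT_CUES = {"crucial social programs", "support to the most vulnerable"}
RIGHT_CUES = {"overburdening hardworking taxpayers", "government overreach"}

def bias_cue_counts(text: str) -> dict:
    """Count how many cues from each lexicon appear in the text."""
    t = text.lower()
    return {
        "left": sum(cue in t for cue in LEFT_CUES),
        "right": sum(cue in t for cue in RIGHT_CUES),
    }

def bias_label(text: str) -> str:
    """Label the text by the dominant set of cues; 'center' on a tie."""
    counts = bias_cue_counts(text)
    if counts["left"] > counts["right"]:
        return "left-leaning"
    if counts["right"] > counts["left"]:
        return "right-leaning"
    return "center"

print(bias_label("Funding crucial social programs gives support to the most vulnerable."))
print(bias_label("The government is allocating more funds for social programs."))
```

A learned classifier replaces the fixed lexicons with weights over thousands of words and phrases, but the underlying intuition, that loaded wording correlates with outlet leaning, is the same.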

Elsewhere, while generative AI like ChatGPT grabs headlines, microtargeting is a more insidious way AI is wielded in election run-ups. In India, political parties and consultants used machine learning to mine voters’ personal data, analyzing sensitive information like socioeconomic status to tailor messages that were most likely to influence potential supporters.

By building awareness of political bias, news factuality, and the impact of AI, the hope is that voters will head to the polls better informed. But widespread uptake of these tools is still some way off, so until then Azizov hopes people will “apply critical thinking” — especially since the problems posed by AI are likely to persist well after the election results are announced.
