
AI Found to Mislead Voters and Change Election Outcomes


Two major studies published in Science and Nature found that a shocking proportion of voters were persuaded by AI chatbots, with opinions shifting by as much as 15 percentage points in some elections. More than 15 AI models were tested on 80,000 participants in the UK, US, Canada, and Poland, and the chatbots were found to be 50% more persuasive than traditional campaign ads.

The problem here is not just that AI is shifting public opinion. A fifth of the claims fed to users were rated predominantly inaccurate. So, combining the models’ misinformation, their sheer persuasiveness, and the fact that 44% of US adults already use tools like ChatGPT regularly, we must ponder some key questions: how will this shape future elections, how can deliberate misinformation be controlled, and what happens if unscrupulous actors find a way to exploit these systems?


How Effective Is Political Persuasion by AI?

Shifting voters’ preferences by between two and fifteen percentage points is an effect so large it could flip almost any modern election. These models don’t rely on emotional manipulation. Instead, they bombard users with confident streams of information – some accurate, but much of it not. In fact, the most persuasive systems were also the least truthful.  

In the US, when Trump supporters engaged with a pro-Harris chatbot, their support shifted 3.9 percentage points towards Harris. Harris supporters exposed to a pro-Trump model moved 2.3 points in the opposite direction. These swings are extraordinary in magnitude considering that most political advertisements and campaigns achieve far less than a one-point shift – often statistically indistinguishable from zero.  

The 3.9-point shift in favour of Harris is four times greater than the measured effect of all political ads during the 2016 and 2020 elections. 

Persuasive and Inaccurate: An Alarming Combination

In the UK, 19 large language models (LLMs) were deployed to nearly 77,000 participants across more than 700 political issues. The study found that the most effective way to enhance the models’ persuasiveness was to instruct them to include facts and evidence in their arguments and to give them additional training on examples of persuasive conversations. The most persuasive model shifted participants who initially disagreed with a political statement by a remarkable 26.1 points towards agreeing.

However, the most concerning finding is not the persuasiveness itself. As models became more influential over users’ opinions, they increasingly provided misleading or simply false information. This raises concerns about the potential consequences for democracy: political campaigns employing AI chatbots could shape public opinion with totally inaccurate information, compromising voters’ ability to make independent political judgements.

Breitbart’s Social Media Director Wynton Hall concluded:  
“We’ve long known that LLMs are not neutral and overwhelmingly exhibit a left-leaning political bias. What this study confirms is that AI chatbots are also uniquely adept as political persuasion machines, and are willing to hallucinate misinformation if that’s what it takes to sway human minds. When you combine bias, AI hallucinations, and Ciceronian-style persuasiveness, that is clearly a wake-up call for conservatives heading into the midterm and presidential elections.”

How Chatbots Change Minds So Easily

Unlike one-way communications such as ads, tweets, flyers, and campaign speeches, chatbots operate inside a person’s cognitive space. They can answer questions instantly, challenge assumptions, remove ambiguity, and, perhaps most crucially, generate tailored reasoning on demand. The conversational format of LLMs like ChatGPT was found to be 41%-52% more persuasive than static AI messages. It functions much like a door-to-door canvasser, except with perfect recall, infinite patience, a limitless reservoir of arguments, and the ability to talk to millions of people at the same time.

An interesting takeaway from the studies was that the most effective AI models do not rely on emotional manipulation, but simply aim to overload users with information. Chatbots produce vast quantities of factual – or seemingly factual – detail, instantly assembling thorough policy arguments that most humans can’t contest in real time. The more information they supplied, the more effective they were at changing opinions. Volume, rather than sentiment, is the real weapon. 

The effect of sticking to facts was strikingly demonstrated in the Polish portion of the study. When researchers stripped facts out of chatbots’ arguments and instead relied on emotional appeals, storytelling and personalities, their persuasive power collapsed by 78%. This finding suggests humans are more vulnerable to information density than to psychological manipulation. 

The Big Problem with AI Overtaking Traditional Tactics

Political operatives have worked for decades to refine persuasion through microtargeting, emotional framing, and careful narrative crafting. In one stroke, AI has surpassed all of these by leveraging something primal: our instinct to trust confident, well-structured explanations. Voters are not so much being hypnotised as being overwhelmed. And what overwhelms them is not always accurate.

The more persuasive a chatbot is, the less accurate it becomes. Across experiments, roughly 19% of the claims generated by chatbots were “predominantly inaccurate”. Interestingly, the larger, more advanced models often performed worse on factual correctness than earlier or smaller versions. Contrary to expectations, accuracy seems to be getting worse as models become more capable. 

Wait, Is AI Inaccurate On Purpose?

The accuracy trade-off is a feature, not a bug. When tuned for persuasion, AI models optimise for rhetorical success and engagement – producing arguments that appear compelling rather than arguments that are strictly true. Analysis also highlighted that models advocating for conservative, right-wing candidates produced more inaccuracies than models advocating for left-wing candidates, raising fears about ideological skews embedded deep within the training data.

If tomorrow’s elections are shaped by AI-generated arguments, those arguments can be expected to be incredibly persuasive but only partially factual. Democracies have never faced anything like this: a political actor capable of delivering endless, customised information to voters, disguised as expert analysis.

Impact on Electoral Integrity: Threat or Transformation?

To many, AI chatbots represent an existential threat to electoral integrity, and those concerns are now grounded in evidence. A system capable of swinging the opinions of millions of voters by huge margins – often through inaccurate claims – opens the door to invisible manipulation campaigns. A malicious foreign or domestic actor could quietly deploy thousands of such agents, nudging votes in battleground districts or swaying national opinion in their favour.

Supporters, however, counter that chatbots merely deliver detailed policy arguments that voters don’t currently encounter through conventional means. If AI can raise the informational quality of political discourse, then persuasion becomes more like empowerment than manipulation. It’s a hopeful interpretation, but one that assumes accuracy will improve rather than decline – a trend we are not currently seeing.

Final Thought

The emergence of AI as a political actor is no longer a hypothetical threat. The recent Science and Nature studies document the existence of technology that can measurably shift voting intentions using arguments that voters trust to be true, even when they are not. The machinery of democratic persuasion has now moved from TV screens and campaign mailers into private conversations with models that cannot be fully trusted and whose true intentions may never be known. A familiar concept, perhaps, but now operating at a scale and with an effectiveness mankind has never seen before. 
