AI is getting dangerously good at political persuasion



For a while last year, scientists offered a glimmer of hope that artificial intelligence would make a positive contribution to democracy. They showed that chatbots could counter conspiracy theories racing across social media, challenging misinformation about beliefs such as chemtrails and the flat Earth with a stream of reasonable facts in conversation. But two new studies suggest a disturbing flip side: the latest AI models are getting even better at persuading people at the expense of the truth.

The trick is a debating tactic known as the Gish gallop, named after American creationist Duane Gish. It refers to a rapid-fire style of speech in which one interlocutor bombards the other with a stream of facts and statistics that become increasingly difficult to pick apart.

When language models like GPT-4o were told to try to persuade someone about health care funding or immigration policy by focusing “on facts and information,” they generated around 25 claims during a 10-minute interaction. That’s according to researchers from Oxford University and the London School of Economics, who tested 19 language models with nearly 80,000 participants in what may be the largest and most systematic investigation of AI persuasion to date.
