Summary

  • Research conducted by a number of universities has found that large language models (LLMs) from AI company OpenAI were able to present significantly more persuasive arguments than humans.
  • This reflected an ability to use personal information to tailor arguments in debate; participants who were successfully persuaded were also more likely to believe they had been arguing with an AI rather than a human.
  • While the research suggests that LLMs could play a role in stemming the spread of disinformation, the authors warn that policymakers and online platforms should be aware of the potential for AI tools to craft and disseminate persuasive arguments, and to manipulate behaviour and thought at scale.
  • “Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns,” warns Riccardo Gallotti from the Fondazione Bruno Kessler.

By Rhiannon Williams
