Here's Why You Shouldn't Trust News Summaries From AI Chatbots (Especially One)
Summary
A new report from the BBC has found that AI chatbots produce news summaries with significant issues around 51% of the time, and that 19% of responses citing BBC content introduced factual errors.
Google’s Gemini performed worst, with 60% of its summaries containing errors, while ChatGPT and Perplexity were close behind, each producing errors in around 40% of their summaries.
The chatbots were unable to accurately differentiate between fact and opinion and often failed to include important context, resulting in misleading responses.
This comes after Apple recently faced criticism for similar issues with its AI notification summaries and turned them off for news apps.
Until AI improves significantly, it’s best for consumers to read AI-generated news summaries critically, or to skip them and summarize the news for themselves.