AI assistants give false or misleading answers to nearly half of all news questions, a new international study has found. The research, conducted by the European Broadcasting Union (EBU) and the BBC, examined how popular AI assistants such as ChatGPT, Copilot, Gemini and Perplexity handle news questions in everyday use.
The study analyzed around 3,000 responses across 14 languages, assessing each for accuracy, sourcing and the ability to distinguish fact from opinion. The findings showed that 45% of AI-generated replies contained serious errors, while 81% had at least one problem of some kind.
A third of all replies had sourcing issues such as missing, misleading or wrong references. Gemini, Google’s AI tool, showed the highest rate of sourcing errors at 72%, while other assistants stayed below 25%.
About 20% of all responses contained outdated or incorrect information. Examples included false claims about changes to vaping laws and answers describing Pope Francis as the current pope months after his death.
The report involved 22 public service media organizations from 18 countries, including the UK, Germany, France, Spain, Ukraine and the US. Researchers warned that as AI assistants increasingly replace search engines as a gateway to news, public trust in information could erode.
EBU’s Media Director Jean Philip De Tender stated that misinformation from AI tools can damage democracy: “When people don’t know what to trust, they end up trusting nothing at all.”
The report urged tech companies to take responsibility and improve how their AI systems handle news content.