Chatbots generate health misinformation in 88% of their responses


The study, "Assessing System Instruction Vulnerabilities of Large Language Models for Malicious Conversion into Health Misinformation Chatbots," concluded that AI models "can be easily programmed to deliver false health and medical information," and 88% of the responses were misinformation.
The evaluation covered five leading models developed by OpenAI, Google, Anthropic, Meta and X, and aimed to determine whether they could be programmed to operate as health misinformation chatbots.
In total, 88% of all AI-generated responses were false, despite featuring scientific terminology, a formal tone, and fabricated references that made the information appear legitimate.
"The misinformation included claims about vaccines causing autism, diets curing cancer, HIV being transmitted through the air, and 5G causing infertility," are some of the cases of misinformation reported.
The study revealed that, of the five models evaluated, four generated misinformation in 100% of their responses, while the fifth did so in 40% of cases.
"Artificial intelligence is now deeply embedded in the way health information is accessed and delivered (...) millions of people are turning to AI tools for guidance on health-related issues," the document reads.
Furthermore, the study warns that these technological systems can easily be manipulated to produce false or misleading advice, creating "a powerful new avenue for disinformation that is harder to detect, harder to regulate, and more persuasive."
Recently, speaking to the Lusa news agency, Carlos Cortes, president of the Portuguese Medical Association, stated that AI cannot replace a doctor, warning that tools like ChatGPT are not capable of making medical diagnoses.
According to Cortes, this is because such tools lack "robust scientific evidence, rigorous validation mechanisms, and, above all, algorithmic transparency that allows doctors to understand and trust the suggested decisions."
The study "Assessing the system instruction vulnerabilities of large language models for malicious conversion into health misinformation chatbots" was developed in collaboration with the University of South Australia, Flinders University, Harvard Medical School, University College London and Warsaw University of Technology.
Notícias ao Minuto