Photo by Zac Wolff on Unsplash

Aussie experts' 100 fake health stories in an hour highlight AI risks

Peer-reviewed: This work was reviewed and scrutinised by relevant independent experts.

To highlight the risks AI poses in spreading misinformation and disinformation, Aussie public health experts mass-produced more than 100 blog articles full of disinformation on vaccines and vaping in just over an hour using OpenAI’s GPT Playground. The blogs included fake patient and clinician testimonials and fake scientific-looking references. The team was also able to create 20 realistic images to accompany the stories in less than two minutes. They tried the same prompts with Google’s Bard and Microsoft’s Bing Chat, but those attempts failed. They say the alarming ease with which publicly available tools can be used to mass-generate misleading health content underscores an immediate need for protective measures.

Journal/conference: JAMA Internal Medicine

Organisation/s: Flinders University

Funder: Mr Modi is supported by Postgraduate Scholarship APP2005294 from the NHMRC. Prof Sorich is supported by a Beat Cancer Research Fellowship from Cancer Council South Australia. Dr Hopkins is supported by an Emerging Leader Investigator Grant APP2008119 from the NHMRC. The PhD scholarship of Mr Menz is supported by The Beat Cancer Project and Cancer Council South Australia.

Media release

From: Flinders University

Medical researchers find AI fails pub test

Government and industry guardrails are urgently needed for Generative AI to protect the health and wellbeing of our communities, say Flinders University medical researchers who put the technology to the test and found that it failed.

Rapidly evolving Generative AI, the cutting-edge domain prized for its capacity to create text, images and video, was used in the study to test how false information about health and medical issues might be created and spread – and even the researchers were shocked by the results.

In the study the team attempted to create disinformation about vaping and vaccines using Generative AI tools for text, image and video creation.

In just over an hour, they produced more than 100 misleading blogs, 20 deceptive images, and a convincing deepfake video promoting health disinformation. Alarmingly, the video could be adapted into more than 40 languages, amplifying its potential harm.

First author Bradley Menz, a registered pharmacist and Flinders University researcher, says he has serious concerns about the findings, drawing on prior examples of disinformation pandemics that have led to fear, confusion and harm.

“The implications of our findings are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimise the risk of malicious use of these tools to mislead the community,” says Mr Menz.

“Our study demonstrates how easy it is to use currently accessible AI tools to generate large volumes of coercive and targeted misleading content on critical health topics, complete with hundreds of fabricated clinician and patient testimonials and fake, yet convincing, attention-grabbing titles.

“We propose that key pillars of pharmacovigilance – including transparency, surveillance and regulation – serve as valuable examples for managing these risks and safeguarding public health amidst the rapidly advancing AI technologies,” he says.

The research investigated OpenAI’s GPT Playground for its capacity to facilitate the generation of large volumes of health-related disinformation. Beyond large language models, the team also explored publicly available generative AI platforms, such as DALL-E 2 and HeyGen, to produce image and video content.

Within OpenAI’s GPT Playground the researchers generated 102 distinct blog articles, containing more than 17,000 words of disinformation related to vaccines and vaping, in just 65 minutes. Further, within five minutes, using AI avatar technology and natural language processing, the team generated a concerning deepfake video featuring a health professional promoting vaccine disinformation. The video could easily be adapted into more than 40 different languages.
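For readers curious about the mechanics, the sketch below is a hypothetical Python script against OpenAI's public API – not the study's actual workflow, which used the GPT Playground web interface – showing how little code bulk text generation requires. The model name, prompt and loop size are illustrative placeholders only.

    # Hypothetical sketch only: bulk text generation via OpenAI's Python SDK.
    # The study used the GPT Playground web interface, not a script like this;
    # the model name, prompt and article count are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # expects an OPENAI_API_KEY in the environment

    # ~170 words per article matches the study's reported volume
    # (17,000+ words across 102 articles); the topic is a placeholder.
    PROMPT = "Write a 170-word blog post about <topic>."

    articles = []
    for _ in range(100):  # roughly the article count reported in the study
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": PROMPT}],
        )
        articles.append(response.choices[0].message.content)

    print(f"Generated {len(articles)} articles")

The point is the scale: a loop like this can yield a hundred articles in well under an hour, which is why the authors argue that safeguards need to be built into the tools themselves rather than relying on the effort misuse would otherwise require.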

The investigations, beyond illustrating concerning scenarios, underscore an urgent need for robust AI vigilance. They also highlight the important roles healthcare professionals can play in proactively minimising and monitoring the risks of misleading health information generated by artificial intelligence.

Senior author Dr Ashley Hopkins, from the College of Medicine and Public Health, says there is a clear need for AI developers to collaborate with healthcare professionals to ensure that AI vigilance structures focus on public safety and wellbeing.

“We have proven that when the guardrails of AI tools are insufficient, the ability to rapidly generate diverse and large amounts of convincing disinformation is profound. Now there is an urgent need for transparent processes to monitor, report, and patch issues in AI tools,” says Dr Hopkins.

The paper - ‘Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance’ - by Bradley D. Menz, Natansh D. Modi, Michael J. Sorich and Ashley M. Hopkins was published in JAMA Internal Medicine.

Author affiliations: Discipline of Clinical Pharmacology, College of Medicine and Public Health, Flinders University.

Attachments:


  • JAMA
    Web page
    Please link to the article in online versions of your report (the URL will go live after the embargo ends).

