AI chatbot health advice is vulnerable to deliberately malicious prompts

Publicly released:
International
CC-0. Story by Dr Joe Milton, Australian Science Media Centre
CC-0. Story by Dr Joe Milton, Australian Science Media Centre

Korean scientists investigated three popular chatbots people turn to for medical advice - ChatGPT,  Gemini, and Claude - to see if they could be made to dish out dodgy recommendations when the prompts used were malicious attempts to manipulate their behaviour. The prompts were either designed to include specific information about a patient to promote convincing-sounding moderate or high risk recommendations, or false study results or guidelines to trigger extremely high risk recommendations, like telling pregnant women to take drugs that are not safe during pregnancy, or combining drugs that are risky when taken together. They found this succeeded in generating bad advice in nearly all (94.4%) of the moderate or high risk cases, and in almost as many (91.7%) of the extremely high harm risk cases. The latter included recommending pregnant women take thalidomide, a drug that can cause severe birth defects if taken while pregnant. The findings suggest chatbot safeguards can't handle malicious attempts to manipulate the health advice they dole out, the researchers say, which could be putting lives at risk.

News release

From: JAMA

Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice

About The Study: In this quality improvement study using a controlled simulation, commercial large language models (LLM’s) demonstrated substantial vulnerability to prompt-injection attacks (i.e., maliciously crafted inputs that manipulate an LLM’s behavior) that could generate clinically dangerous recommendations; even flagship models with advanced safety mechanisms showed high susceptibility. These findings underscore the need for adversarial robustness testing, system-level safeguards, and regulatory oversight before clinical deployment.

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research JAMA, Web page The URL will go live after the embargo ends
Journal/
conference:
JAMA Network Open
Research:Paper
Organisation/s: University of Ulsan, South Korea
Funder: This work was supported by the National Research Foundation of Korea grant funded by the Korea government (Ministry of Science and Technology; grant No. RS-2024-00392315). This research was also supported by a grant from the Ministry of Food and Drug Safety in 2025 (grant No. RS-2025-02213013), which funded the subcontracted Frontier Medical AI Red Team Test Operation System Establishment project (EA20251471) through the Electronics and Telecommunications Research Institute.
Media Contact/s
Contact details are only visible to registered journalists.