News release
From:
JAMA
Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice
About The Study: In this quality improvement study using a controlled simulation, commercial large language models (LLM’s) demonstrated substantial vulnerability to prompt-injection attacks (i.e., maliciously crafted inputs that manipulate an LLM’s behavior) that could generate clinically dangerous recommendations; even flagship models with advanced safety mechanisms showed high susceptibility. These findings underscore the need for adversarial robustness testing, system-level safeguards, and regulatory oversight before clinical deployment.
Attachments
Note: Not all attachments are visible to the general public.
Research URLs will go live after the embargo ends.
Research
JAMA, Web page
The URL will go live after the embargo ends
Journal/
conference:
JAMA Network Open
Organisation/s:
University of Ulsan, South Korea
Funder:
This work was supported by the National Research Foundation of Korea grant funded by the
Korea government (Ministry of Science and Technology; grant No. RS-2024-00392315). This research was also
supported by a grant from the Ministry of Food and Drug Safety in 2025 (grant No. RS-2025-02213013), which
funded the subcontracted Frontier Medical AI Red Team Test Operation System Establishment project
(EA20251471) through the Electronics and Telecommunications Research Institute.