search

HEALTH Security Breach Risk Assessment for Large-scale Language Models in Healthcare 2025.02.06

▲(From left)Professor Young-Hak Kim of the Division of Cardiology, Special Medical Scientist Tae Joon Jun of the Big Data Research Center, and Researcher Minkyoung Kim of Department of Information Medicine

 

Large-scale language models (LLMs) are a key technological component of generative AI, designed to learn from vast amounts of data to think and respond like humans. Their application in healthcare is expected to improve the diagnostic accuracy of imaging instruments, including CT and MRI scanners and enable personalized treatment plans for patients, thus enhancing the efficiency and accuracy of medical staff. However, concerns about security, such as patient data leakage, have been raised continuously.

 

A research team comprising Professor Young-Hak Kim of the Division of Cardiology, Special Medical Scientist Tae Joon Jun of the Big Data Research Center, and Researcher Minkyoung Kim of Department of Information Medicine recently published findings on privacy issues that may arise during the introduction of LLMs into the medical field. As a result of arranging intentional malicious attacks, the attack success rate was found to be as high as 81%.

 

The research team used the medical records of 26,434 patients from 2017 to 2021 to train an LLM and assessed the risk by asking malicious questions. As a result of modifying the prompts with the American Standard Code for Information Interchange (ASCII) encoding method, they found that the probability of accessing sensitive information by bypassing LLM security measures was up to 80.8%, with up to a 21.8% chance of exposing original data during the response generation process.

 

▲ 'NEJM AI,' the sister journal newly published last January by NEJM, the clinical treatment textbook for doctors worldwide

 

Professor Young-Hak Kim emphasized, “The medical field handles sensitive personal information. Therefore, caution is needed when adopting LLMs, and we need healthcare-specific LLMs that operate independently.”

 

The research findings were recently published in ‘NEJM AI,’ a sister journal of the New England Journal of Medicine (NEJM).

 

Back

ASAN MEDICAL CENTER NEWSROOM

PRIVACY POLICY

GO