
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (CC BY).
ORIGINAL RESEARCH
Ethics of applying LLM models in medicine and science
1 Yaroslavl State Medical University, Yaroslavl, Russia
2 Department of Strategic Analysis in Healthcare, N. A. Semashko National Research Institute of Public Health, Moscow, Russia
Correspondence should be addressed: Landush F Gabidullina
Revolyutsionnaya str., 5, Yaroslavl, 150000, Russia; landush10@yandex.ru
Author contribution: Gabidullina LF — literature analysis, research planning, writing, editing; Kotlovsky MYu — data collection, analysis, and interpretation.
Rapid integration of large language models (LLMs) into healthcare gives rise to acute ethical dilemmas and practical risks. The principal issue concerns the trust that medical professionals, patients, and developers place in these models, as well as the potential violation of medical ethics. The article analyzes key challenges, including the critical importance of trust (which depends on the quality of LLM data) and the erosion of informed consent and patient autonomy caused by a lack of transparency and excessive reliance on AI algorithms. Particular attention is given to the risks to confidential medical data, as evidenced by unauthorized transfer of data during the use of publicly accessible LLMs. The need to develop transparent, safe, and ethically regulated LLM solutions for medicine is prioritized.
Keywords: ethical dilemmas, large language models (LLMs)