ORIGINAL RESEARCH
Framework for risk evaluation of medical AI systems
1 NA Semashko National Research Institute of Public Health, Moscow, Russia
2 Russian University of Medicine, Moscow, Russia
Correspondence should be addressed: Elena Alekseevna Volskaya
Chernyakhovsky St., 4/a, apt. 52, Moscow, 125319, Russia; vols-elena@yandex.ru
Author contribution: the authors contributed equally to the research and to the writing of the article.
Medical technologies based on artificial intelligence (AI) systems have become firmly established in routine clinical practice as assistive and auxiliary tools that supply clinicians with important information for diagnostic and treatment decisions. To obtain valid evidence of their quality, effectiveness, and safety, developers of AI software conduct clinical trials of these systems in accordance with current regulatory requirements [1], guided by the recommendations of recognized experts in the field of clinical research [2]. Ethics committees are tasked with conducting a high-quality ethical review of the planned research that takes into account the specifics of AI technologies used in medicine and the risks associated with their use.
Keywords: medical AI system, ethical evaluation, risk evaluation, ethical postulates