This article examines the concept of predictable harm as a methodological tool for comprehensive risk assessment in the development and implementation of AI-powered medical devices. The study's relevance stems from the exponential growth in the use of AI technologies in healthcare and the lack of unified approaches to predicting the potential negative consequences of their use. Existing regulatory approaches to risk assessment, including Russian regulatory documents and international standards, are analyzed. A multidimensional classification of types of predictable harm is proposed that covers the entire life cycle of medical AI systems. Special attention is given to the ethical aspects of using artificial intelligence in medicine, including the principles of patient autonomy, equity, non-maleficence, and algorithmic transparency. An expanded matrix for assessing predictable harm has been developed that integrates technological, clinical, and ethical parameters for each stage of the development and implementation of AI systems in medical practice. The results of the study can serve as a methodological framework for developers of medical AI systems, regulatory authorities, and medical organizations when assessing the safety and effectiveness of introducing intelligent technologies into clinical practice.