Copyright: © 2025 by the authors. Licensee: Pirogov University.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (CC BY).

OPINION

The concept of predictable harm in development of AI-powered medical devices

Begishev IR, Shutova AA
About authors

Kazan Innovative University named after VG Timiryasov, Kazan, Russia

Correspondence should be addressed: Ildar R. Begishev
42 Moskovskaya St., Kazan, Russia, 420111; begishev@mail.ru

About paper

Financing: the work was carried out under a grant from the Academy of Sciences of the Republic of Tatarstan awarded to young candidates of sciences (postdoctoral fellows) to enable them to defend doctoral dissertations and to perform research and employment duties in scientific and educational organizations of the Republic of Tatarstan within the framework of the Scientific and Technological Development of the Republic of Tatarstan State Program.

Acknowledgements: the authors express their sincere gratitude to the Academy of Sciences of the Republic of Tatarstan for the financial support of this study based on the results of a competitive selection (grant No. 153/2024-PD dated December 16, 2024) for the ‘Ensuring the technological sovereignty of the healthcare system by means of criminal law’ research project, as well as deep gratitude to the staff of the Scientific Research Institute of Digital Technologies and Law of Kazan Innovative University named after Timiryasov VG for valuable consultations and constructive discussion of the conceptual provisions of the study.

Author contribution: Begishev IR — research conceptualization; development of the theoretical foundations of predictable harm; analysis of the specific risks associated with using artificial intelligence in medical devices; systematization of the predictable harm typology; study of the legal framework for regulating AI systems in healthcare in various jurisdictions; formulation of research conclusions; preparation of the initial version of the manuscript; Shutova AA — development of a methodology for predicting and preventing predictable harm; creation of a matrix for assessing predictable harm at various stages of the medical AI system life cycle; development of recommendations for the practical implementation of the predictable harm concept; analysis of literature sources; editing and critical revision of the manuscript with the introduction of valuable intellectual content; visualization of the research (preparation of tables).

Compliance with ethical standards: a meeting of the ethics committee was not required, since this study is theoretical and methodological in nature and analyzes open literature sources and regulatory legal documents, without conducting experiments involving humans or animals and without using personal patient data.

Received: 2025-05-03 Accepted: 2025-05-16 Published online: 2025-06-15

Integration of artificial intelligence into medical devices offers unprecedented opportunities to improve diagnostic processes, personalize therapeutic approaches, and optimize clinical decision-making. However, the rapid introduction of AI systems into healthcare is associated with specific risks that require systematic analysis and proactive management. In this context, the concept of predictable harm acquires crucial significance as a methodological tool for the preventive identification and minimization of potential negative consequences of using AI-powered medical devices. A critical analysis of existing regulatory approaches to assessing the risks of AI systems in healthcare, as well as the integration of ethical principles into the process of forecasting and preventing possible harm, is of particular importance.

The relevance of the study is determined by the exponential growth of the market for AI solutions in healthcare and the lack of unified approaches to assessing their safety. According to the Grand View Research analytical report, the global market for artificial intelligence in medicine will reach 120.2 billion US dollars by 2028, growing at roughly 41.8% per year, which illustrates the scale of the patient safety challenges involved [1].

The purpose of this study is to form a methodological framework for identifying and minimizing predictable harm when developing and implementing AI-powered medical devices. To achieve this goal, the following tasks have been set:

  1. To conceptualize the term ‘predictable harm’ in the context of medical AI technologies;
  2. To analyze the specific risks associated with the use of artificial intelligence in medical devices;
  3. To examine existing approaches to regulating the safety of AI systems in healthcare and compare them with the authors’ concept;
  4. To develop a methodology for predicting and preventing potential harm in the creation of medical AI systems, with detailed ethical elaboration.

ESSENTIAL PART

The concept of predictable harm for AI-powered medical devices is a methodological construct that integrates the principles of predictive risk analysis, proactive safety management, and iterative reassessment of the potential harmful effects of the technology. What fundamentally distinguishes this concept from traditional approaches to risk assessment is the shift of focus from reactive incident response to preventive forecasting of possible adverse event scenarios caused by the specific functioning of AI systems.

In this context, predictable harm can be defined as the set of potential negative effects of using medical AI systems that can be identified and minimized through systematic analysis of the technology's characteristics, the context of its application, and the possible trajectories of the system's evolution during operation. The key attributes of this definition are the predictive nature of the assessment, a systematic approach to risk analysis, and consideration of the dynamic nature of AI technologies.

At the present stage, several regulatory approaches to risk assessment for AI-powered medical devices already exist.

Thus, Order No. 686n of the Ministry of Health of the Russian Federation dated July 7, 2020 [2] and letter No. 02I-297/20 of the Federal Service for Healthcare Supervision dated February 13, 2020 [3] provide for a risk rating under which all AI-powered medical devices are assigned to Class III at the stage of state registration, before they are put into clinical use. This approach relies on centralized regulation and an a priori high-risk classification of all AI systems in medicine.

The International Forum of Medical Device Regulators (IMDRF, 2014) offers a differentiated classification of potential risks of AI-powered medical devices, depending on clinical application and possible impact on the treatment process (Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations) [4]. This classification takes into account both the severity of potential harm to the patient and the role the AI system plays in the clinical process.
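For illustration, the categorization logic of this framework can be expressed as a simple lookup. The sketch below (in Python, used here purely for illustration) reflects our reading of the IMDRF categorization table, mapping the state of the healthcare situation and the significance of the information provided by the software to categories I–IV; it is a minimal sketch, not a normative implementation of the framework.

```python
# Minimal sketch of the IMDRF SaMD risk categorization (IMDRF/SaMD WG/N12:2014).
# Category IV is the highest risk; the mapping below reflects our reading of the
# framework's categorization table and is given for illustration only.

SIGNIFICANCE = ("treat_or_diagnose", "drive_clinical_management", "inform_clinical_management")
SITUATION = ("critical", "serious", "non_serious")

# (state of healthcare situation, significance of information) -> SaMD category
IMDRF_CATEGORY = {
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive_clinical_management"): "III",
    ("critical", "inform_clinical_management"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive_clinical_management"): "II",
    ("serious", "inform_clinical_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_clinical_management"): "I",
    ("non_serious", "inform_clinical_management"): "I",
}

def categorize(situation: str, significance: str) -> str:
    """Return the IMDRF SaMD category for a given clinical context."""
    if situation not in SITUATION or significance not in SIGNIFICANCE:
        raise ValueError("unknown situation or significance level")
    return IMDRF_CATEGORY[(situation, significance)]

# Example: an AI system that drives clinical management of a serious condition
assert categorize("serious", "drive_clinical_management") == "II"
```

Such a lookup makes explicit that the same software function can fall into different risk categories depending on the clinical context in which it is used.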

The industry appendix to the Code of Ethics for Artificial Intelligence of the Alliance for Artificial Intelligence details the gradation of risks according to the severity of errors associated with the use of artificial intelligence systems and focuses on the consequences of incorrect medical decisions [5].

The specific risks associated with the use of artificial intelligence in medical devices stem from a number of unique characteristics of these technologies: autonomous functioning, potential non-transparency of the decision-making process (the black box problem), the capacity for self-learning and adaptation, and high dependence on the quality of source data [6]. Together, these features constitute a multidimensional risk profile that requires a differentiated approach to risk identification and management.

The proposed concept of predictable harm is characterized by a multidimensional structure spanning the entire life cycle of AI systems. It includes the following (see the sketch after the list):

  • proactive focus on risk identification;
  • differentiated approach to types of harm (algorithmic, data-centric, integration, etc.);
  • multilevel stratification of responsibilities of participants;
  • iterative risk assessment and adaptation to the evolution of AI systems;
  • integration of technological and clinical aspects of quality.
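
To make this structure concrete, the following sketch models a single predictable-harm record combining a harm type, a life-cycle stage, a multilevel allocation of responsibility, and an iterative reassessment interval. All class names, field names, and values are hypothetical illustrations and are not fixed by the concept or by any regulatory document.

```python
# Hypothetical illustration of the multidimensional structure of predictable harm;
# the enumerations and field names are examples, not normative terms.
from dataclasses import dataclass
from enum import Enum

class HarmType(Enum):
    ALGORITHMIC = "algorithmic"        # e.g. systematic model errors, bias
    DATA_CENTRIC = "data-centric"      # e.g. unrepresentative or corrupted training data
    INTEGRATION = "integration"        # e.g. failures at the interface with clinical workflows

class LifeCycleStage(Enum):
    DESIGN = "design"
    VALIDATION = "validation"
    DEPLOYMENT = "deployment"
    POST_MARKET = "post-market surveillance"

@dataclass
class PredictableHarm:
    description: str
    harm_type: HarmType
    stage: LifeCycleStage
    responsible_parties: list[str]          # multilevel stratification of responsibility
    mitigation: str
    reassessment_interval_days: int = 180   # iterative reassessment as the system evolves

risk = PredictableHarm(
    description="Degraded sensitivity on an underrepresented patient subgroup",
    harm_type=HarmType.DATA_CENTRIC,
    stage=LifeCycleStage.POST_MARKET,
    responsible_parties=["developer", "clinical operator"],
    mitigation="Inclusive revalidation on subgroup-specific data",
)
```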

Tab. 1 systematizes the types of predictable harm for medical AI systems.

Safety regulation of AI-powered medical devices is characterized by significant heterogeneity of approaches in different jurisdictions. The European Union is implementing a structured regulatory system through the Medical Devices Regulation (MDR 2017/745) [7] and the Artificial Intelligence Regulation (AI Act) [8], which classifies medical AI systems as high-risk ones and sets strict requirements for their transparency and validation.

The US Food and Drug Administration (FDA) implements an adaptive approach based on the Pre-Certification Program, which focuses on evaluating the developer's processes and quality culture [9]. This approach involves continuous monitoring of system performance under real operating conditions and iterative reassessment of the risk-benefit profile.
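As an illustration of what such continuous monitoring might look like at the implementation level, the sketch below flags a system for risk-benefit reassessment when its rolling sensitivity, estimated from real-world outcomes, drifts below the baseline established during pre-market validation by more than a preset margin. The metric, threshold, and function names are hypothetical assumptions and do not reproduce any FDA procedure.

```python
# Hypothetical sketch of post-market performance monitoring: compare a rolling
# estimate of sensitivity against the baseline established during validation
# and flag the system for risk-benefit reassessment when drift exceeds a margin.

def rolling_sensitivity(outcomes: list[tuple[bool, bool]]) -> float:
    """outcomes: (model_flagged_positive, condition_truly_present) pairs."""
    flagged_for_true_cases = [flagged for flagged, present in outcomes if present]
    if not flagged_for_true_cases:
        return float("nan")
    return sum(flagged_for_true_cases) / len(flagged_for_true_cases)

def needs_reassessment(current: float, baseline: float, margin: float = 0.05) -> bool:
    """Trigger reassessment when performance drifts below baseline - margin."""
    return current < baseline - margin

# Example: baseline sensitivity 0.92 from pre-market validation,
# rolling sensitivity 0.85 observed under real operating conditions.
recent = [(True, True)] * 85 + [(False, True)] * 15 + [(False, False)] * 100
current = rolling_sensitivity(recent)
if needs_reassessment(current, baseline=0.92):
    print(f"Sensitivity {current:.2f} below baseline; trigger iterative safety review")
```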

Based on the analysis of existing approaches and regulatory requirements, we propose an integrated methodology for predicting and preventing predictable harm in the development of AI-powered medical devices. It comprises the following components: a multi-level risk assessment model, an inclusive validation system for heterogeneous patient populations, mechanisms for ensuring the interpretability of algorithms, an infrastructure for continuous performance monitoring, and iterative safety reassessment processes.

The proposed methodology can be implemented in practice through the matrix of predictable harm assessment presented in tab. 2.
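The contents of tab. 2 define the actual assessment parameters; purely as an illustration of how such a matrix can be operationalized, the sketch below scores each identified harm by severity and likelihood and maps the product to an acceptability decision. The scales and thresholds are hypothetical and are not taken from tab. 2.

```python
# Hypothetical operationalization of a harm assessment matrix as a
# severity x likelihood scoring; the scales and thresholds are illustrative
# and do not reproduce the parameters of tab. 2.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

def risk_level(severity: str, likelihood: str) -> str:
    """Map a severity/likelihood pair to an acceptability decision."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 9:
        return "unacceptable: redesign or additional safeguards required"
    if score >= 4:
        return "conditionally acceptable: mitigation and monitoring required"
    return "acceptable: routine surveillance"

# Example rows of a simplified matrix for one life-cycle stage
for harm, sev, lik in [
    ("algorithmic bias on a patient subgroup", "serious", "possible"),
    ("integration failure with the hospital information system", "minor", "rare"),
]:
    print(f"{harm}: {risk_level(sev, lik)}")
```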

Ethical aspects form an integral part of the predictable harm concept and are reflected at all stages of the AI-powered medical device life cycle. The main dimensions of ethical responsibility are as follows:

  1. Patient autonomy — it is critically important to ensure that the implementation of AI systems does not diminish the patient's role in the decision-making process. The risks of clinicians placing excessive automatic trust in AI recommendations and the quality of informed consent should be taken into account.
  2. Justice means preventing algorithmic discrimination and ensuring access to AI technologies for various patient groups. Particular attention should be paid to data-centric risks and data representativeness.
  3. Non-maleficence requires taking into account the possibility of delayed and systemic consequences associated with evolutionary changes in self-learning algorithms.
  4. Transparency and explainability mean ensuring the interpretability of decisions and the possibility of audit for both specialists and patients, thereby overcoming the black box effect.
  5. Mandatory ethical audit involves analyzing the compliance of the AI system with medical ethics and regularly revising the risk matrix with regard to the vulnerability of certain patient categories and long-term consequences.

These provisions are reflected in the matrix of predictable harm assessment and presented in detail (see tab. 2).

The use of the matrix makes it possible to structure the process of identifying and minimizing predictable harm at all stages of the AI-powered medical device life cycle, providing an integrated approach to risk management and compliance with regulatory and ethical requirements.

When developing medical AI systems, forming a culture of transparency is crucial for the effective implementation of the predictable harm concept. This includes open communication regarding technological limitations, active involvement of clinical specialists at all stages of product creation, and application of the safety-by-design principle, which involves integrating safety mechanisms directly into the system architecture. In contrast to existing regulatory approaches that focus primarily on technical characteristics and preliminary risk classification, the proposed concept assumes mandatory integration of ethical audits at each stage of the life cycle of a medical AI system. This requires multidisciplinary collaboration between developers, clinicians, ethicists, and patient community representatives to prevent algorithmic discrimination, preserve patient autonomy, and maintain equitable access to the benefits of the technology. The matrix of predictable harm assessment, which includes ethical and clinical parameters, thus becomes not merely a documentation tool but a platform for continuous dialogue between all participants in the process of introducing AI into clinical practice.

SUMMARY AND CONCLUSIONS

The conducted research allows us to formulate the following main conclusions:

  1. The predictable harm concept is an effective tool for improving the safety of introducing artificial intelligence into medicine, enabling proactive identification and minimization of risks.
  2. The unique risks of using artificial intelligence require a differentiated management approach in which the integration of ethical aspects is mandatory.
  3. The comparative analysis shows that the authors' concept complements and expands existing regulatory approaches, providing a multidimensional, continuous, and ethically grounded harm assessment.
  4. Prospects for further work are associated with developing universal methodologies and standardizing risk assessment practices for the creation and application of artificial intelligence in healthcare.

Thus, the development and implementation of the predictable harm concept in the creation and deployment of AI-powered medical devices is essential, as it can ensure an optimal balance between the innovative potential of these technologies and patient safety. A comparative analysis with existing regulatory approaches shows the advantages of the proposed concept in terms of multidimensional risk assessment and the integration of ethical principles at all stages of the AI life cycle. The matrix of predictable harm, which includes parameters of clinical consequences and ethical assessment, makes it possible to move from formal risk management procedures to a systematic approach that takes into account both the technological and the humanitarian aspects of using artificial intelligence in healthcare. Promising directions for further research in this area include the development of standardized risk assessment methodologies for various categories of AI systems, the creation of validated ethical audit tools for medical AI solutions, and the formation of unified regulatory requirements that synthesize technological standards with the principles of medical ethics and take into account the long-term social consequences of introducing intelligent technologies into healthcare.
