OPINION

Mechanisms for the introduction of artificial intelligence in healthcare: new ethical challenges

About authors

1 Yaroslavl State Medical University, Yaroslavl, Russia

2 Pirogov Russian National Research Medical University, Moscow, Russia

3 Central Research Institute of Healthcare Organization and Informatization, Moscow, Russia

Correspondence should be addressed: Mikhail Yu Kotlovsky
Revolyutsionnaya str., 5, Yaroslavl, 150000, Russia; m.u.kotlovskiy@mail.ru

About paper

Author contribution: the authors have made an equal contribution to the research work and writing of the article.

Received: 2024-07-31 Accepted: 2024-08-18 Published online: 2024-09-18

The term “artificial intelligence” was coined by John McCarthy at the Dartmouth workshop in 1956. The event lasted six weeks, during which leading experts interested in modeling the human mind (John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, Arthur Samuel, Allen Newell, Herbert Simon, Trenchard More, Ray Solomonoff, and Oliver Selfridge) discussed the fundamental possibility of creating a thinking machine. The conference laid out the main directions of the new field of science [1].

Significant events in the development of artificial intelligence (AI) in general, and in its application in medicine, include Alan Turing’s article “Computing Machinery and Intelligence” (1950) [2]; the first AI-related scientific article indexed in the PubMed database, “Some Studies in Machine Learning Using the Game of Checkers” (1964) [3]; one of the first AI-based clinical decision support systems, MYCIN (1976) [4]; and the authorization of such a system (IDx-DR) for clinical practice in the USA (2018) [5], among many others. However, the field could not avoid periods of waning interest caused by immature technology and inflated expectations [6].

The 2022 decision of the European regulator to grant the ChestLink service a CE mark for clinical use can undoubtedly be considered a landmark event. The service was developed by the Lithuanian startup Oxipit. This software product autonomously analyzes chest X-ray images and, when no pathology is found, issues a report for the patient without involving a radiologist [7]. Russia has now come close to developing similar systems.

Thus, AI systems in medicine are becoming increasingly autonomous. This can help address a number of problems in modern healthcare, such as staff shortages, professional burnout, and, in some cases, insufficient staff qualifications. However, it also raises the ethical bar for the reliability of such systems. Notably, none of these systems can yet guarantee error-free operation in 100% of cases, and it seems doubtful that this level will ever be achieved.

Russian analysts identify the following problems associated with the introduction and use of AI in practical healthcare, which can make the use of such systems unethical:

  • insufficient evidence of the effectiveness and safety of solutions at the time of their approval for use by medical professionals;
  • an increased risk of harm to patients’ health due to potential errors in data processing and in the generated conclusions (recommendations);
  • the difficulty of interpreting machine learning decisions (the “black box” problem);
  • the higher risks of self-learning algorithms, which can change the way they function as new clinical data emerge, including data obtained during their operation;
  • cybersecurity issues, including unauthorized interference with AI algorithms and unauthorized access to patients’ personal data;
  • data bias, which leads to an asymmetry between the data on which AI models were trained and the data they analyze in real clinical practice [8].

It is worth noting that the USSR not only kept up with Western countries in AI development but was ahead in a number of areas. However, well-known political events allowed our partners to pull ahead in this field [9]. Today, Russia is actively developing the field and increasing its efforts every year.

So, what is artificial intelligence? The presidential decree on the development of AI in the Russian Federation provides the following definition: artificial intelligence is a set of technological solutions that makes it possible to simulate human cognitive functions (including self-learning and the search for solutions without a predetermined algorithm) and, in performing specific tasks, to obtain results comparable to those of human intellectual activity [10].

The literature identifies the main characteristics that allow AI systems to be regarded as cognitive, namely the ability to:

  • understand (assimilate new content and incorporate it into an established system of views and ideas);
  • reason (build a chain of thoughts and conclusions on a given topic and present them in a logically consistent form);
  • self-learn (change and adapt one’s behavior in line with the goals of survival, development, and improvement);
  • expand capabilities (enlarge the set of tools and methods that ensure the highest possible productivity) [11].

These qualities are fully inherent only in so-called strong AI. Scientists currently distinguish between weak and strong AI: weak AI can solve, without human involvement, only the narrow applied tasks for which it was directly created, whereas strong AI is expected to be autonomous and capable of learning and reasoning.

The National Strategy of the Russian Federation for the Development of Artificial Intelligence notes that fundamental scientific research in this field is aimed at creating universal (strong) AI [10].

The emergence of strong AI is currently associated with the development of generative neural networks and the large language models built on them. There are already several dozen such systems, including ChatGPT (Generative Pre-trained Transformer) [12]. These AI systems are designed to handle almost any textual user request, including writing texts and program code; computer programs have thus become capable of writing new computer programs on their own. By 2022, according to quality metrics measuring associative thinking, these models had approached the basic human level. Funding for the development of these models is also growing rapidly year on year [13].

From an ethical standpoint, it is worth noting that in 2023 more than a thousand AI experts signed an open letter calling for a six-month moratorium on the development of strong AI (a category to which they assign these models) in order to develop an effective risk-forecasting system and ensure control over the development and consequences of introducing intelligent technologies [14]. WHO has likewise called for balanced and responsible development and application of large language models [15].

As already mentioned, in 2019 the Russian Federation adopted the National Strategy for the Development of Artificial Intelligence until 2030. It defines the role of AI in healthcare as improving the quality of services, including:

  • preventive examinations;
  • diagnostics based on image analysis;
  • forecasting the occurrence and development of diseases;
  • selection of optimal dosages of medicines;
  • reduction of pandemic threats;
  • automation and increased precision of surgical interventions [10].

It is worth noting that, apart from Russia, more than 60 countries have adopted similar documents on AI development [16]. This reflects awareness, at the state level, of the economic, political, and defense advantages that the operation of such systems provides.

The strategy states that, to stimulate the development and use of AI technologies most effectively, it is necessary to adapt legal regulation of human interaction with AI and to develop fundamental ethical standards [10].

It should be noted that in Russia the National Committee on the Ethics of Artificial Intelligence was established in 2020 under the Commission of the Russian Federation for UNESCO; it is the first such body in the world under a national commission for UNESCO. The committee includes leading experts in the field [17].

At the international level, 193 countries concluded a global agreement on the ethics of AI at the 41st session of the UNESCO General Conference (2021) [18]. Russia was among the most active participants in drafting the recommendations and initiated discussion of a number of key issues in this area.

It is worth noting that, at the interstate level, more than a dozen international organizations are developing recommendations and standards for the use of AI in medicine; however, their actions are not always well coordinated [19].

As an example of non-compliance with ethical principles, one can cite a study mentioned in the 2022 State of AI report by British researchers. Instead of being asked to reduce the toxicity of a drug, an AI system was tasked by scientists with selecting the chemical formula of a substance with maximum toxicity and biological activity; the system generated the formula of the VX warfare agent [20].

Thus, almost all AI systems are dual-use: they can be configured to benefit or to harm people, depending on the professionalism and good will of their creators and operators.

In 2021, Russia adopted its own Code of Ethics in the field of artificial intelligence. It was developed by the AI Alliance, which unites leading Russian technology companies, with the participation of the Analytical Center under the Government of the Russian Federation and the Ministry of Economic Development. To date, 303 leading Russian organizations and 41 federal authorities have joined the Code.

Its provisions apply to relations associated with the ethical aspects of the life cycle of an AI system, including its creation (design, construction, piloting), implementation, and use at all stages.

The Code’s provisions apply on condition that:

  • these relations are not currently regulated by the legislation of the Russian Federation and/or acts of technical regulation;
  • the AI systems are used exclusively for civilian (non-military) purposes [21].

This Code proclaims the following fundamental ethical principles and rules of conduct:

  1. the main priority in the development of AI technologies is protecting the interests and rights of people and of the individual;
  2. one must be aware of one’s responsibility when creating and using AI;
  3. responsibility for the consequences of using AI always lies with a human;
  4. AI technologies should be used for their intended purpose and implemented where they will benefit people;
  5. the interests of developing AI technologies take precedence over the interests of competition;
  6. maximum transparency and truthfulness in informing about the level of development of AI technologies, their capabilities, and their risks [21].

The provisions of this Code permeate the entire life cycle of an AI system. A typical example is the life cycle of such systems in the Moscow experiment conducted on the basis of the Scientific and Practical Clinical Center for Diagnostics and Telehealth Technologies of the Moscow Healthcare Department (fig. 1). The experiment, the world’s largest of its kind, has been running since 2020 and has allowed leading domestic and foreign manufacturers of medical AI systems to demonstrate their capabilities [22].

Note that this life cycle should include continuous training and retraining of the product, since a model begins to become outdated from the moment it is created. Monitoring and control activities should likewise be carried out on a continuous basis.
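As an illustration of such continuous monitoring, below is a minimal sketch (our own, not drawn from the Moscow experiment’s methodology) that flags input drift by comparing the distribution of one feature in live clinical data against the training data using the Population Stability Index (PSI); the feature, the synthetic data, and the 0.2 alert threshold are illustrative assumptions.

import numpy as np

def psi(train_values: np.ndarray, live_values: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training and live feature values."""
    # Bin edges come from the training distribution; infinite outer edges
    # capture live values that fall outside the training range.
    edges = np.percentile(train_values, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    expected, _ = np.histogram(train_values, bins=edges)
    actual, _ = np.histogram(live_values, bins=edges)
    # Convert counts to proportions; epsilon avoids log(0) and division by zero.
    eps = 1e-6
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
train_age = rng.normal(55, 12, 10_000)  # patient ages seen during training
live_age = rng.normal(63, 12, 2_000)    # older population after deployment
if psi(train_age, live_age) > 0.2:      # common rule-of-thumb alert threshold
    print("Input drift detected: schedule model review and retraining.")

In practice, such a check would be run routinely over all model inputs (and over output quality metrics), with alerts feeding the retraining and control loop described above.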

AI-based programs in the Russian Federation can take two forms: information systems and medical devices. In the latter case, they undergo preclinical (laboratory) and clinical trials, are subject to registration with Roszdravnadzor, and are entered into a special register of medical devices.

Software is a medical device if it has the following attributes:

  • it is a computer program, regardless of the hardware platform used and of how it is deployed and accessed;
  • it is not an integral part of another medical device;
  • it is intended by the manufacturer for the provision of medical care;
  • it transforms the source information.

Clinical trials (studies) of software based on AI technologies are conducted under a permit issued by the registration authority (Roszdravnadzor) and an opinion on the ethical validity of the trial issued by the Ethics Council of the Ministry of Health of the Russian Federation for the circulation of medical devices [23].

The Russian Federation has a system of ethics committees at both the national and local levels. One of the main functions of local ethics committees is to verify that patients in clinical trials have been informed, fully and in an accessible form, about the risks and benefits of participation, and that they have given informed consent to participate in the study or consent to the use of information obtained while medical care was being provided to them.

Thus, when conducting a clinical trial (study) of an AI system, permission must be obtained from ethics committees at both levels, and there should be a constant exchange of information with the organization where the trial is conducted. The ethics committee must be notified of any serious adverse events. Clinical trials should be conducted in full compliance with the ethical principles of the Declaration of Helsinki [20]. Previously obtained medical information may also be used in clinical trials of these systems when direct patient participation is not required (a retrospective study design).

Methodological recommendations have now been developed in Russia for the ethical review of programs of clinical trials (studies) of AI systems for healthcare. Fig. 2 shows the recommended algorithm for such a review [8].

It is worth noting that, apart from state control, systems of voluntary certification of AI have begun to be developed and implemented in Russia. Such certification aims to assess the compliance of AI algorithms with the requirements imposed on them. An example is the INTELLIGOMETRICA system developed by the National Research University Higher School of Economics (no. ROSS RU.B2915.04VSE0).

The process of collecting and using medical data raises a number of ethical issues. Appropriate data are needed both to create and to monitor AI systems; at the same time, such data are diverse and complex in structure. One of the developer’s key tasks is to integrate this heterogeneous information into a structured dataset. A dataset differs from a simple collection of medical data in that it has special properties (a minimal validation sketch follows the list):

  1. it is unified and structured;
  2. it is free of gross inaccuracies and erroneous values;
  3. it carries additional information (categories and values of features, or characteristics of data elements) [24].
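By way of illustration, the following sketch (an assumption-laden example of ours, not a fragment of any registered dataset tooling) checks a tabular fragment against these three properties: a fixed schema (unified and structured), range and codebook checks (no gross errors), and explicit per-feature metadata. All column names and codes are hypothetical.

import pandas as pd

# Hypothetical schema: column -> (expected dtype, allowed range or codebook).
SCHEMA = {
    "patient_id": ("object", None),
    "age_years": ("int64", (0, 120)),
    "diagnosis_code": ("object", {"I10", "E11", "J18"}),  # invented subset
}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of problems found; an empty list means the checks pass."""
    problems = []
    for col, (dtype, constraint) in SCHEMA.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
            continue
        if str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected dtype {dtype}, got {df[col].dtype}")
        if isinstance(constraint, tuple):        # numeric range check
            lo, hi = constraint
            n_bad = ((df[col] < lo) | (df[col] > hi)).sum()
            if n_bad:
                problems.append(f"{col}: {n_bad} values outside [{lo}, {hi}]")
        elif isinstance(constraint, set):        # categorical codebook check
            n_bad = (~df[col].isin(constraint)).sum()
            if n_bad:
                problems.append(f"{col}: {n_bad} values outside the codebook")
    return problems

df = pd.DataFrame({"patient_id": ["p1", "p2"], "age_years": [64, 233],
                   "diagnosis_code": ["I10", "XXX"]})
print(validate(df))  # reports the impossible age 233 and the unknown code "XXX"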

In Russia, a dataset is legally equated with a database and, as a result of intellectual activity, may voluntarily undergo state registration. When conducting clinical trials of an AI system, it is recommended to use only registered datasets [25].

In foreign practice, datasets are often released not only as files available for download but also as scientific publications in journals.

Thus, creating a high-quality dataset is just as difficult and demanding as writing software code and training a machine learning model.

Whoever owns the data creates the AI systems. This is borne out by the experience of foreign corporate giants: Western companies are willing to pay billions of dollars to possess such data (e.g., IBM Watson Health).

In the Russian Federation, a significant number of open datasets with medical information available for download (mainly annotated medical images) have been created by the Scientific and Practical Clinical Center for Diagnostics and Telehealth Technologies of the Moscow Healthcare Department [26]. Great hopes for data accumulation are placed on the subsystems created within the Uniform State Health Information System.

Currently, a large number of datasets with medical information (sometimes of unknown quality) remain publicly available on foreign platforms (Kaggle, Google Dataset Search, AWS Public Datasets, etc.).

In the Russian Federation, medical data used for training and testing AI systems must not contain personal information without the consent of the patient to whom the data belong (GOST R 59921.5-2022, Artificial intelligence systems in clinical medicine).

Any such information should be deleted from both the metadata and the source data. At the same time, it remains an open question how to strike a balance that neither violates the rights and legitimate interests of the patient nor over-regulates the industry and hampers its development. It is no secret that modern systems, including AI-based ones, can re-identify a person from indirect data (for example, by reconstructing a patient’s face from the bones of the skull and then identifying the person through social networks) [27]. The question therefore remains open as to exactly which information should be deleted and how to retain information that is still of some use for machine learning.
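A minimal de-identification sketch is shown below; it is an illustration under our own assumptions, not the procedure prescribed by GOST R 59921.5-2022. Direct identifiers are dropped and the record key is replaced with a salted, irreversible pseudonym so that records remain linkable for research; the identifier list and field names are hypothetical, and, as noted above, removing direct identifiers alone does not rule out re-identification from indirect data.

import hashlib

# Hypothetical list of direct identifiers; a real profile must follow the
# applicable standard and the decision of an ethics committee.
DIRECT_IDENTIFIERS = {"full_name", "birth_date", "address", "phone"}

def deidentify(record: dict, secret_salt: str) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # A salted one-way hash keeps records linkable across tables without
    # exposing the original identifier. Indirect data may still allow
    # re-identification, so this alone is not sufficient anonymization.
    raw = str(record.get("patient_id", "")) + secret_salt
    clean["patient_id"] = hashlib.sha256(raw.encode()).hexdigest()[:16]
    return clean

record = {"patient_id": "12345", "full_name": "Ivanov I. I.",
          "birth_date": "1961-03-08", "age_years": 63, "diagnosis_code": "I10"}
print(deidentify(record, secret_salt="keep-this-secret"))
# -> {'patient_id': '...', 'age_years': 63, 'diagnosis_code': 'I10'}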

To remove unnecessary administrative barriers, a number of regulatory acts in the Russian Federation allow experimental legal regimes to be established for developers for a limited time, in a limited territory, and for a limited set of subjects [28].

When AI systems are created and operated in medicine, compliance with international and interstate ethical regulations requires transparency of their application and the possibility for a human to cancel and/or prevent socially and legally significant decisions at any stage of the life cycle. Leading international organizations (UNESCO, the EU, the OECD) are calling for greater transparency of AI systems.

In this context, transparency means the following (a toy illustration follows the list):

  • the availability of the source code;
  • the user’s understanding of why the machine made a particular decision;
  • the patient’s understanding of the positive and negative aspects of interacting with AI.
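To make the second point concrete, here is a toy sketch, with invented data and feature names, that shows a user why a linear model produced a given risk score by decomposing the prediction into per-feature contributions relative to an average patient; real systems may require richer explanation methods.

import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age_years", "systolic_bp", "glucose_mmol_l"]
rng = np.random.default_rng(1)
X = rng.normal([60.0, 130.0, 5.5], [10.0, 15.0, 1.0], size=(500, 3))
# Synthetic label: risk grows with all three features, plus noise.
y = ((X - X.mean(axis=0)) @ np.array([0.04, 0.03, 0.5])
     + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
patient = np.array([71.0, 150.0, 7.1])
risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
print(f"predicted risk: {risk:.2f}")
# Per-feature contribution to the decision score (log-odds) relative to an
# average patient from the training data.
for name, coef, value, mean in zip(features, model.coef_[0], patient, X.mean(axis=0)):
    print(f"{name}: value {value:.1f}, contribution {coef * (value - mean):+.2f}")

A linear model is used here precisely because its decisions decompose transparently; for “black box” models, post hoc explanation methods would play the same role.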

At the international level, the transparency of AI systems is being standardized by, among others, the IEEE (Institute of Electrical and Electronics Engineers), a non-profit engineering association based in the United States.

It is worth noting that AI systems in medicine are not currently endowed with legal personality. They serve only as advisers to the physician and do not relieve the physician of legal responsibility for actions performed in the course of providing medical care. Responsibility currently lies with the attending physician; in some cases, legal responsibility may be assigned to the medical organization where these systems are used, or to their developer and manufacturer [29].

The implementation of the above ethical principles is facilitated by ensuring the comparability of technical parameters, unifying AI safety requirements, and enabling the free cross-border exchange of medical technologies and the results of their implementation in medicine, biology, and pharmacology. This is confirmed by WHO Resolution WHA71.7 of May 26, 2018 on digital health [30]. In the Russian Federation, Technical Committee for Standardization 164 “Artificial Intelligence” (TC 164) was established to improve the efficiency of standardization in the development and operation of AI systems at the national and international levels; more than 120 specialized organizations take part in its activities. Within this committee, subcommittee SC 01 “Artificial Intelligence in Healthcare” was formed; it operates on the basis of the Scientific and Practical Clinical Center for Diagnostics and Telehealth Technologies of the Moscow Healthcare Department [30]. In addition, the work of a number of other technical committees and subcommittees may be relevant to the development of fundamental state industry standards in this area, including:

  • TK-MTK-22 “Information Technology” / SC 132 “Data management and data exchange”;
  • TK-MTK-22 “Information Technology” / SC 138 “Platforms and services for distributed applications”;
  • TK-MTK-22 “Information Technology” / SC 127 “Information Technology Security”;
  • TC 96 “Biometrics and biomonitoring”;
  • TC 164 “Artificial intelligence” / SC 02 “Data”;
  • TC 194 “Cyber-physical systems”;
  • TC 362 “Information protection”;
  • TC 459 “Information support of the product lifecycle”.

It is worth noting that the work of SC 01 alone has resulted in the entry into force of more than a dozen state industry standards, including the GOST R 59921 series “Artificial intelligence systems in clinical medicine”, which covers the procedure for conducting technical and clinical trials of AI-based medical systems, and GOST R 59276 “Artificial intelligence systems. Ways to ensure trust”.

CONCLUSION

The intensive development of AI systems and their widespread introduction into healthcare inevitably pose many ethical questions for Russian society. The importance of ethical regulation in this segment of AI development stems from several key considerations. First, it protects patients’ rights to the greatest extent possible, including ensuring informed consent, data confidentiality, and security. Second, fairness and equality of access to AI-based medical technologies must be ensured, preventing possible discrimination. In addition, the potential risks and limitations of using AI in medicine must be taken into account (such as algorithm errors, incomplete transparency of decision-making, and the need for constant monitoring and updating of systems).

Ethical standards should also include mechanisms for controlling the responsibility and accountability of AI developers and users. Accordingly, the development and implementation of AI systems in healthcare should be not only guided but also strictly controlled by ethical principles. Taking appropriate measures at the national and international levels can significantly increase public confidence in the use of AI in medical institutions, contribute to the overall progress of the modern healthcare system, and improve the quality of medical care.
