OPINION

Philosophical issues of trust in artificial intelligence and “smart” algorithms in medicine

Georgiou TS
About authors

Moscow State Regional University, Moscow, Russia

Correspondence should be addressed: Takis Sophokli Georgiou
4 Exegerseos, Perivolia, Larnaca, 7560 Cyprus; takis.georgiou.s@gmail.com

About paper

Author contribution: the article is part of the author's dissertation research on the topic ‘Artificial Intelligence: Social Risks’.

Received: 2021-08-01 Accepted: 2021-08-20 Published online: 2021-09-30

In modern society, artificial intelligence (AI) permeates virtually every sphere of human life, since AI is an essential component of the products and devices people use daily. AI systems are becoming both an environment for and participants in human social interactions [1].

The introduction of AI systems into medicine is one of the most important trends in modern global healthcare. AI technologies are significantly transforming the healthcare system: they are revolutionizing medical diagnostics, aiding the development of new drugs, and generally improving the quality of healthcare services.

The term ‘AI’ is defined differently in the literature, in scientific journals, and on the Internet. According to many scientists [2], AI is a set of rational, logical, and formal rules developed and coded by humans. These rules simulate intellectual structures and reproduce goal-directed rational actions, with subsequent coding and instrumental decision-making without a pre-set algorithm. This means that intelligent systems, marketed as ‘smart’ systems, can act autonomously.

AI differs from traditional computer algorithms in that it is capable of self-learning from accumulated experience. Owing to this property, AI can act differently in identical situations, depending on its earlier experience.
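To make the contrast concrete, here is a minimal illustrative sketch in Python (not from the article; all names are invented): a fixed rule always returns the same answer for a given input, while a toy self-updating learner can answer the same input differently after accumulating experience.

```python
def fixed_rule(x: float) -> str:
    # Traditional algorithm: behavior is fully determined in advance.
    return "high" if x > 0.5 else "low"

class OnlineThresholdLearner:
    """Toy self-updating classifier: its threshold drifts with experience."""

    def __init__(self, threshold: float = 0.5, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def predict(self, x: float) -> str:
        return "high" if x > self.threshold else "low"

    def learn(self, x: float, label: str) -> None:
        # Shift the threshold toward examples the learner got wrong.
        if self.predict(x) != label:
            direction = -1.0 if label == "high" else 1.0
            self.threshold += direction * self.rate

learner = OnlineThresholdLearner()
print(learner.predict(0.45))      # "low" before any experience
for _ in range(3):
    learner.learn(0.45, "high")   # accumulated experience
print(learner.predict(0.45))      # now "high": same input, different answer
```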

Smart systems equipped with AI are ascribed such features as ‘intelligence’, ‘logic’, ‘rationality’, and the ‘capability to think like a human’ in all or certain circumstances.

Defined this way, the term AI provokes disputes among scientists: some argue that it is impossible to simulate all functions of the human brain, while others believe that AI may even surpass human intellect. Accordingly, two types of AI are distinguished: narrow (weak) AI and strong, or general, AI [3].

Strong AI refers to an intelligent system's ability to think, to be aware of itself as a separate personality and, in particular, to comprehend its own thoughts; its intellectual process is similar to the human one.

Narrow AI covers all systems used to solve intellectual tasks that help a human achieve set goals. Human intervention remains necessary here, and such applications have become part of our lives: we are surrounded by, and constantly use, devices with weak AI. These AI technologies also generate strong interest in medicine, where they are widely applied in ‘smart’ healthcare systems.

The government of the Russian Federation also takes an interest in AI. According to the Presidential Decree ‘On the Development of Artificial Intelligence in the Russian Federation’, ‘… the creation of universal (strong) artificial intelligence capable of solving different tasks, thinking, interacting and adapting to changing conditions is a complex scientific and technical problem, the solution of which lies at the junction of natural-science, technical and socio-humanitarian scientific knowledge. Solving it may lead both to positive changes in key spheres of life and to negative consequences caused by the social and technological changes that accompany the development of AI technologies’ [4]. According to the same Decree, the use of AI technologies in the social sphere improves quality of life through better healthcare services [4].

In modern medicine, AI is one of the most important constituents of medical activity. Intelligent data analysis, expert systems, neural networks, evolutionary algorithms, and biocomputing are used to achieve the objectives of modern medicine.

Healthcare places great hopes in AI. AI can make healthcare more effective and convenient for patients, speed up diagnosis and reduce the number of diagnostic errors, help patients manage their symptoms or cope with a chronic disease, and avoid human bias and error.

However, using such tools in healthcare requires accumulating and analyzing vast amounts of biological data from millions of patients and comparing them with clinical data. Expert systems are used to cope with non-formalized problems, which include many tasks in medicine; most importantly, such tasks have no unique algorithmic solution.
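As a rough illustration of the classic expert-system pattern mentioned here, the following hedged Python sketch applies human-authored IF-THEN rules to a set of reported findings. The rules and findings are invented placeholders, not real clinical guidance.

```python
# Toy rule base: (set of required findings, recommendation).
RULES = [
    ({"fever", "cough", "dyspnea"}, "suspect pneumonia, order chest imaging"),
    ({"fever", "cough"}, "suspect respiratory infection"),
    ({"fatigue", "polyuria"}, "check blood glucose"),
]

def infer(findings: set[str]) -> list[str]:
    """Forward-chaining over the rule base: fire every rule whose
    conditions are all present among the reported findings."""
    return [advice for conditions, advice in RULES
            if conditions <= findings]

print(infer({"fever", "cough", "dyspnea"}))
# ['suspect pneumonia, order chest imaging', 'suspect respiratory infection']
```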

Among the ambitious projects currently under development are IBM Medical Sieve, the IBM Watson supercomputer, AI-RAD Companion Brain MR for Morphometry Analysis, AI-RAD Companion Prostate MR for Biopsy Support, the Russian AI-based Celsus service, the World Well-Being Project, and others.

These projects aim to develop ‘smart assistants’ with multi-level analytical abilities. Such assistants can access the knowledge accumulated in clinical practice and reason over it in a way that facilitates decision-making in various areas (specialties) of medicine.

Naturally, the use of big data and AI technologies raises diagnostics, treatment, and disease prevention to a new level. However, such use of AI in medicine requires trust in ‘smart systems’ and thereby raises serious philosophical issues. In the transition to digital medicine, questions of ethics become crucial: they determine the pace of technological progress in this sphere [5]. Thus, there is much concern about the extent to which health data may be used to train AI and the extent to which ‘smart systems’ can be trusted.

Self-learning is the basic feature of ‘smart’ systems. However, it must be noted that there are serious risks associated with ensuring that self-learning proceeds correctly and with determining its boundaries; overfitting is the main threat to effective treatment. Note that ‘smart’ systems are not smart in themselves: they are built on various AI technologies and depend on the tasks set for them, implementing the required functions through algorithms developed by AI specialists.
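A small illustrative sketch of the overfitting threat, using scikit-learn on synthetic data rather than real clinical records: an unconstrained model can look perfect on its training set while performing noticeably worse on unseen cases.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, deliberately noisy data (flip_y adds label noise).
X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(max_depth=None).fit(X_tr, y_tr)   # memorizes noise
shallow = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)   # constrained

print("deep:    train %.2f  test %.2f" % (deep.score(X_tr, y_tr),
                                          deep.score(X_te, y_te)))
print("shallow: train %.2f  test %.2f" % (shallow.score(X_tr, y_tr),
                                          shallow.score(X_te, y_te)))
# Typically the unconstrained tree scores ~1.00 on its training data but
# noticeably worse on the held-out set; that gap is the overfitting.
```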

That is why the Nuffield Council [6], an independent organization that examines ethical issues in biology and medicine, stresses the questions that must be addressed first:

  • Who is responsible for the decisions taken by AI systems?
  • Will the progressive use of AI lead to a loss of human contact in the delivery of medical care?
  • What happens when AI systems fail?
  • How can we trust systems that may slip out of control at any time?

Moreover, the Nuffield Council does not exclude that AI may make an erroneous decision, which again raises the question of who is responsible for the decisions taken by AI [6].

Machine-learning algorithms are not transparent: they give people no way to understand why AI draws particular associations or conclusions, and it is not known when the system will fail.
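The opacity problem can be illustrated with a short, hedged sketch: a black-box model returns a prediction with no human-readable rationale, and one common (imperfect) workaround is to fit a simple interpretable surrogate that approximates its behavior. The data here are synthetic and the setup is purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The prediction itself carries no explanation:
print(black_box.predict(X[:1]))   # e.g. [1] -- but why?

# Global surrogate: train an interpretable model to mimic the black box,
# then read its coefficients as an approximate, partial explanation.
surrogate = LogisticRegression(max_iter=1000).fit(X, black_box.predict(X))
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print("coefficients:", surrogate.coef_.round(2))
```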

Neither science nor medicine possesses absolute knowledge; there are different approaches to the object of cognition and differing research results and outcomes. This creates a problem with the data used to train AI. Moreover, trusting a ‘smart’ system requires data protection, especially where confidential data are concerned.

In 1931, the well-known Austrian mathematician Kurt Gödel showed that any consistent formal system expressive enough to contain arithmetic is incomplete: within its boundaries there are assertions that can be neither proved nor refuted [7]. It is well known that mathematics forms the basis of AI algorithms. If everything is predetermined, a machine cannot freely solve new, unanticipated tasks; and if the system is able to solve tasks on its own, there are cases in which unpredictable reactions may occur. In both cases significant problems may arise. This is a serious challenge for medicine.
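For reference, a standard informal statement of Gödel's first incompleteness theorem (a paraphrase added for clarity, not taken from the article):

```latex
\textbf{First incompleteness theorem (informal).} Let $F$ be a consistent,
effectively axiomatized formal system that can express elementary
arithmetic. Then there is a sentence $G_F$ in the language of $F$ such that
neither $G_F$ nor $\neg G_F$ is provable in $F$:
\[
  F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F .
\]
```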

Thus, no ‘smart’ system should make decisions independently, without human involvement, especially where moral acts are concerned [8], and above all in medicine.

Some scientists consider machine learning an excellent instrument for AI agents, but the difficulty of explaining how it works makes the algorithms mysterious even to their creators. This limits people's ability to understand the technology and undermines trust in AI systems. Trust is critically important in all relationships and is a precondition for acceptance in society. Scientists and developers share the view that developing AI without human observation and oversight carries significant risks. Trust and control are the two basic aspects of building safe and reliable AI [9].

People entrust ‘smart’ systems with the most important things: money, health, and safety. This means we do not merely use these technologies, we depend on them. This is how we become vulnerable.

In this context, AI ethics defines the moral obligations and duties of developers and of users, such as medical workers dealing with AI. AI bioethics addresses the ethical problems that may arise in the design and development of AI for medicine. Thus, there are important aspects that endanger both medicine and society as a whole unless they are reconciled with bioethics [10, 11].

Gartner, a well-known research and consulting company focused on information technology markets, regularly publishes technology trend forecasts. Its specialists believe that almost all technologies that will significantly affect business, people, and society in the coming decade are associated with AI [12]. There is no doubt that the implementation of AI is a one-way road: the process will continue and will embrace every dimension of personal, professional, and social human activity. This is also true of medicine.

There is no doubt that AI technologies in medicine can be used for the benefit of humans, but absolute trust in ‘smart’ algorithms can have devastating consequences. Thus, an immense responsibility rests on medical scientists: they must ensure the safety of humans and society, taking into account both obvious and latent risks, moral standards, and the principles of bioethics.
