OPINION

Philosophical issues of trust in artificial intelligence and “smart” algorithms in medicine

Georgiou TS

Moscow State Regional University, Moscow, Russia

Correspondence should be addressed: Takis Sophokli Georgiou
4 Exegerseos, Perivolia, Larnaca, 7560 Cyprus; takis.georgiou.s@gmail.com


Author contribution: the article is part of the research work on the author's dissertation on the topic ‘Artificial Intelligence: Social Risks’.

Received: 2021-08-01 Accepted: 2021-08-20 Published online: 2021-09-30

In this article, the author examines the philosophical issues associated with the introduction of artificial intelligence (AI) systems in medicine. The use of AI technologies in medical science is currently one of the most important trends in global healthcare. ‘Smart’ AI systems are able to learn from experience, adapt to their environment and, within the parameters of the tasks assigned to them, make decisions that were previously reserved for humans. These AI technologies offer an opportunity to take diagnostics, treatment and disease prevention to a higher level. Particular attention is paid to the ethics and moral obligations of AI developers and healthcare professionals in the transition to such digital medicine.

Keywords: safety, artificial intelligence, machine learning, bioethics, medicine, ‘smart’ systems, moral decisions, trust
