ORIGINAL RESEARCH

Ethical and cultural challenges posed by artificial intelligence (AI) in medical practice: multicultural analysis

Wiegel NL1, Mettini E2
About authors

1 Rostov State Medical University, Rostov-on-Don, Russia

2 Pirogov Russian National Research Medical University, Moscow, Russia

Correspondence should be addressed: Narine L. Wiegel
Nakhichevansky Lane, 29, Rostov-on-Don, 344022, Russia; 22nara@mail.ru

About paper

Author contribution: the authors have made an equal contribution to the research work and writing of the article.

Received: 2024-06-05 Accepted: 2024-07-21 Published online: 2024-08-28

Artificial intelligence (AI) is a branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence, such as recognizing speech, identifying patterns, and making decisions. In recent years, AI has significantly transformed medical practice by offering novel capabilities for improving diagnostics, processing big data, personalizing medicine, and even automating surgery.

However, integration of AI in medical practice poses not only technical, but also ethical, social and cultural challenges. Studying these aspects is critical because cultural differences can significantly influence perception and acceptance of new technologies. Different cultures may have different approaches to data privacy, the role of the attending physician, and even the very concept of health and disease. This should be taken into account when developing and implementing AI systems in medicine.

The goals and objectives of the study are as follows:

The goals:

  1. To conduct a comprehensive analysis of the ethical and cultural challenges associated with integrating AI into medical practice.
  2. To assess the impact of multicultural factors on the perception and use of AI in medicine.
  3. To develop recommendations for the integration of AI that take into account the cultural and ethical characteristics of different communities.

The objectives:

  1. To study current research and literature on the ethics and culture of AI in medicine.
  2. To conduct surveys and interviews with medical professionals, patients, and AI experts from various cultural groups.
  3. To analyze the data obtained in order to identify the main ethical and cultural contradictions and challenges.
  4. To compare approaches to AI integration into medical practice across different cultures.
  5. To develop a set of recommendations for medical institutions and AI developers that take multicultural factors into account for the ethical and effective use of AI in medicine.

Relevance

Integration of artificial intelligence into medical practices results in significant changes in healthcare, including diagnosis, treatment and monitoring of patients. At the same time, it is necessary to take into account not only technological and clinical, but also cultural and ethical issues. The relevance of the study is due to the following factors:

  1. Ethical risks:
  • Ensuring patient data confidentiality.
  • Transparency and explainability of AI-made decisions.
  • Fairness and non-discriminatory use of AI.
  2. Cultural differences:
  • Perception of technology and trust in AI can vary significantly across cultures.
  • Cultural norms and traditions can influence the willingness of patients and doctors to use AI.
  3. Globalization of healthcare: the growth of migration and of multicultural communities is accompanied by an increasing need to consider cultural factors in healthcare.
  4. Innovation and development: the accelerated development of AI requires adaptive and universal strategies to make its integration into medicine successful.

Novelty

The novelty of the research lies in the development of an interdisciplinary approach to studying the impact of AI on medical practice, taking cultural and ethical aspects into account:

  1. A combination of ethical and cultural analysis: a comprehensive analysis combining ethical and cultural aspects, which has not previously been widely covered in existing studies.
  2. Multidisciplinary approach: integration of knowledge from the fields of medicine, sociology, anthropology, and artificial intelligence to obtain deeper and better-informed conclusions.
  3. Focus on multiculturalism: the study of specific challenges and opportunities associated with the use of AI in multicultural settings, promoting the global adaptation of technologies.

MATERIALS AND METHODS

  1. Research design

The study was based on a qualitative multicultural analysis aimed at identifying the ethical and cultural challenges associated with the introduction of artificial intelligence (AI) into medical practice. Nosological analysis, content analysis of literature sources, and expert interviews were used.

  2. Sample

To ensure the representativeness of the multicultural aspect, the sample included medical institutions and experts from various regions and cultural contexts. Countries with different levels of economic development and cultural traditions were covered, including but not limited to the following:

  • USA
  • Europe (Germany)
  • Asia (India, Japan)
  • Latin America (Brazil)
  3. Data collection

Data were collected through the analysis of documentary sources: scientific articles, government documents, and reports from international organizations devoted to AI in medicine.

  4. Data analysis methods
  • Comparative analysis. Comparative analyses were conducted to identify differences and similarities in AI perception and acceptance in medicine between different cultural and geographical contexts.
  • Sociological analysis. The analysis made it possible to identify social determinants that influence AI perception and integration in medical practices, including social norms, traditions and the level of trust in technology in different countries and cultures.
  5. Limitations of the study

It is worth noting that this study is limited by time and scope of analysis. The conclusions may be specific to the selected cultural contexts and may not necessarily apply to all other situations and regions. A broader understanding requires further research covering a wider range of cultural contexts and time frames.

RESULTS OF THE STUDY

The use of artificial intelligence (AI) in medicine offers great potential for improving the quality and accessibility of treatment, but at the same time it poses a number of ethical challenges to society [1–8]. Let us look at the most significant ones:

  1. Data confidentiality. AI often relies on large amounts of medical data, including patients' personal information. These data must be protected from unauthorized access and leaks; it is important to strictly adhere to the principles of confidentiality and to use advanced encryption and security methods.
  2. Informed consent. It is necessary to make sure that patients are fully aware of how their data will be used and give their consent to such use.
  3. Management and control of errors. Like any technology, AI is not immune to errors that can lead to incorrect diagnosis or treatment. It is necessary to develop systems that minimize possible errors and to establish clear procedures for correcting them.
  4. Inequality in access to health services. There is a risk that the use of AI will widen the gap between those who can afford access to modern medical technologies and those who lack it. It is necessary to ensure equal access to medical innovations.
  5. Transparency of algorithms. AI should be transparent in the sense that healthcare professionals and patients should understand how decisions are made. Opaque algorithms can cause mistrust and complications concerning responsibility and ethics.
  6. The principle of justice. It is necessary to ensure that AI does not create bias against certain groups of the population. This includes ensuring the use of fair data and of algorithms that do not reinforce existing stereotypes (a minimal illustrative check is sketched after this list).
  7. Responsibility for errors. In the case of medical errors, it should be clear who is responsible (AI developers, medical professionals, hospitals, or someone else). Clear legislative and legal frameworks are required.
  8. Making ethical decisions. Decisions involving life and death are never easy. AI should be designed to take into account not only the technical but also the ethical aspects of its use in medicine.
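
As an illustration of how the fairness principle in item 6 could be operationalized, the sketch below (Python, with purely hypothetical group labels and toy data, not taken from this study) compares the rates of positive model outputs across population groups; a large gap between groups would be a signal to audit the data and the algorithm for bias.

```python
# A minimal, illustrative fairness check: compare how often a diagnostic model
# produces a positive output (e.g. "refer for further testing") for each
# population group. Group labels and data below are hypothetical toy values.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive model outputs within each population group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in positive rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy example: 1 = positive output, 0 = negative output.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["group_A", "group_A", "group_A",
          "group_B", "group_B", "group_B", "group_B", "group_B"]

rates = positive_rate_by_group(preds, groups)
print(rates)  # roughly {'group_A': 0.67, 'group_B': 0.4}
print(f"parity gap: {demographic_parity_gap(rates):.2f}")  # a large gap warrants a bias review
```

A check of this kind does not by itself prove fairness or unfairness; it is only one of several possible metrics (equalized odds, calibration across groups, and others) that an interdisciplinary review would consider.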

Addressing the ethical challenges of using AI in medicine requires extensive interdisciplinary collaboration, involving medicine, law, ethics, and information technology, to ensure that its benefits are realized without compromising the rights and well-being of patients.

The use of artificial intelligence (AI) in medicine also raises a number of cultural challenges and issues that must be considered for the successful integration of the technology into medical practice [9–15]. Attitudes towards AI can vary significantly depending on the cultural environment, the level of trust in technology, and traditional ideas about medical ethics and practice. Let us look at a few key aspects:

  1. Trust in technology. In some cultures, high-tech medical equipment and AI in general are regarded as positive and progressive innovations. Other communities, however, may be skeptical of and distrust machines that make important decisions, in part because of concerns about data privacy and the potential errors of machine algorithms.
  2. Privacy and confidentiality issues. AI systems often require large amounts of data for training and analysis, which raises concerns about the security and confidentiality of personal medical information, especially in cultures with strict traditions regarding personal space and informational privacy.
  3. Ethical dilemmas. AI may call into question traditional ethical principles of healing. In some cultures, it may be acceptable for AI to provide treatment recommendations based on statistical data, while in others the final decision may be required to always remain with a person.
  4. Social norms and expectations. In some cultures, the doctor is regarded as an authority, and AI recommendations can be perceived as a challenge to that authority. Different societies may also judge medical mistakes made by AI more harshly than mistakes made by humans.
  5. Accessibility and inequality. The use of AI can exacerbate existing inequalities in access to health services, since access to advanced technologies is often limited in remote or resource-limited regions. This can create tension between groups with different levels of access to technology.
  6. Integration into traditional practices. In different cultural contexts, there may be specific traditional practices or beliefs that must be taken into account when implementing AI. Ignoring these aspects can lead to rejection of the technology by medical staff and patients.

DISCUSSION OF THE RESULTS

Taking into account cultural differences and sensitivity to these aspects is key to creating effective, fair, and ethical medical AI systems [16–24]. An approach based on cooperation between engineers, doctors, ethnographers, and patients can contribute to the development of AI tools that take the cultural context into account and find support in diverse societies.

Different countries approach the cultural challenges of AI implementation in different ways, owing to both historical and modern socio-cultural factors [25–30]. Let us look at some examples.

  1. The United States of America

In the USA, great attention is paid to the confidentiality of patient data. AI in medicine must be used in accordance with the requirements of data privacy legislation such as HIPAA (the Health Insurance Portability and Accountability Act), and AI systems that analyze medical data must strictly comply with these standards. In addition, the highly developed startup culture contributes to the rapid integration of new technologies into medicine.
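
As a purely illustrative sketch of the kind of data-protection discipline such compliance implies, the Python fragment below removes a handful of direct identifiers from a patient record before it is handed to an analytics pipeline. The field names are hypothetical, and real HIPAA de-identification (for example, the Safe Harbor method with its 18 identifier categories, plus administrative and technical safeguards) involves far more than this.

```python
# Illustrative only: strip a few direct identifiers from a patient record
# before analysis. Field names are hypothetical; real de-identification under
# HIPAA covers many more identifier types and organizational safeguards.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "medical_record_number"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record without the listed direct identifiers."""
    return {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "age": 57,
    "diagnosis_code": "I25.1",
    "lab_glucose_mg_dl": 142,
}

print(strip_direct_identifiers(record))
# {'age': 57, 'diagnosis_code': 'I25.1', 'lab_glucose_mg_dl': 142}
```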

  2. Japan

In Japan, respect for the elderly is extremely important, and this is taken into consideration when AI systems are developed, especially in the field of elderly care. Japan is actively researching and developing AI to support the elderly, taking their needs and preferences into account and making various aspects of their lives easier.

  3. Germany

Germany has strict data protection rules that are applied when regulating the introduction of AI into medical practice. Ethics committees are actively involved in approving the use of AI to ensure compliance with ethical standards. Germany also focuses on research into the responsible use of AI, emphasizing the importance of an open dialogue with society on these issues.

  4. India

In India, there are significant regional differences in access to medical services and AI technologies. AI developers try to make the technologies accessible and understandable to a wide range of users, including in regions where the medical infrastructure is underdeveloped. AI projects are often aimed at improving the availability of medical services.

  5. Brazil

In Brazil, as in other Latin American countries, special attention is given to the social responsibility of technology adoption. In medicine, AI is seen as a means of reducing social inequalities in access to healthcare. AI programs strive to be sensitive to different cultural contexts and practices.

To ensure the ethical and culturally sensitive use of AI in medicine, it is necessary to promote interdisciplinary cooperation between specialists in the fields of AI, medicine, sociology, law, and ethics [31–33]. The public and patients should be actively involved in development and decision-making processes. This will help not only to promote innovative development, but also to take moral, ethical, and cultural aspects into account when introducing new technologies into medicine.

CONCLUSIONS

The integration of artificial intelligence (AI) into medical practice is a complex and multifaceted process, accompanied by significant ethical and cultural challenges. During the multicultural analysis, the following key aspects were identified:

  1. Ethical privacy and security concerns:

Considerable attention should be paid to maintaining the confidentiality of personal medical data. The implementation of AI systems requires compliance with strict security standards, as patient data may be vulnerable to leaks and cyber attacks. In addressing these issues, it is important to take into account not only the technical aspects of data protection, but also the ethical standards of each particular culture.

  2. Fairness and non-discrimination:

AI systems used in medical practices should be designed to strictly adhere to the principles of fairness and non-discrimination based on race, gender, social status and other factors. The introduction of algorithms that take into account multicultural diversity and characteristics of each group will contribute to a more just distribution of medical services.

  3. Cultural adequacy:

Different cultural groups may perceive and react differently to the use of AI in medicine. It is important that developers and implementers of AI systems take into account cultural values, customs and biases that may influence the adoption of such technologies by patients of various ethnic and cultural communities.

  4. The problem of technology transfer:

AI introduction in countries with different levels of technical and social development can face many barriers. In developing countries with limited technical resources, the use of AI requires a special approach focused on infrastructure accessibility and sustainability.

  5. Legal and regulatory issues:

Each country has its own legislative framework, which can have a significant impact on how AI is introduced into medical practice. Each country should therefore develop clear regulatory mechanisms that take into account local legal and ethical standards, as well as international experience.

  6. Interaction with medical staff:

The introduction of AI is also changing the role of medical personnel and requires their active participation in the transition to new technologies. It is important to organize training and retraining programs that will allow medical professionals to interact effectively with AI systems while respecting traditional methods of treatment and care.

In conclusion, integration of AI into medical practices represents a unique opportunity to improve the quality and accessibility of medical care. However, successful integration of AI requires careful consideration of ethical and cultural aspects, ensuring that new technologies benefit all groups of the population and strengthen trust in the medical system among both medical staff and patients.
