lexisnexisip.com

Artificial intelligence and life sciences

Monday 29 April 2024

Juliana Sene Ikeda
Campos Thomaz Law, São Paulo
juliana@camposthomaz.com

Alan Campos Thomaz
Campos Thomaz Law, São Paulo
at@camposthomaz.com

Giovana Boesso
Campos Thomaz Law, São Paulo

 

In recent years, society has witnessed exponential development in artificial intelligence (AI), which had already been present to a lesser extent in everyday tasks through algorithms. There has been a significant growth in investment and research in the life sciences and healthcare industry.

According to the Artificial Intelligence Index Report 2023,[1] produced by Stanford University, over $189bn was invested in the development of AI around the world. Among the areas to which resources were directed, the field of medicine and healthcare stands out, receiving a total of $6.1bn in 2022, marking it as the area with the highest investment.

The use and development of AI systems in healthcare have several purposes, including expediting disease prognosis and diagnosis; assisting patients in identifying and understanding health risks; deriving insights from scientific data and research; and developing new medication and medical devices.

However, the use of AI in these fields also raises several concerns, such as liability in the case of system mistakes; algorithm biases; dissemination of false or misleading information; access to health systems by low and medium-income countries; and the high risk of data leaks. To assess the risks associated with the use of AI in the life sciences and healthcare sector, on 18 January 2024, the World Health Organization (WHO) released the report Ethics and Governance of Artificial Intelligence for Health, which is detailed below.

First, it is necessary to clarify that the development of AI systems for healthcare may or may not primarily involve generative AI, specifically the so-called 'large multimodal model' (LMM). This type of AI can integrate highly diverse datasets, accepting multiple types of inputs and generating outputs that are not limited to the type of data entered. This expands its repertoire and functionality, enabling it to identify relevant patterns as information.

In the healthcare sector, the WHO has identified the overestimation of the benefits of LMMs as a risk. Given that this technology is often seen as a solution to pressing social issues, there is a tendency for society to overestimate its results and overlook its problems. In the hope of improving health indicators, governments may adopt AI systems without adequate evidence of their safety and efficacy, which would cause significant risk to the wellbeing of patients and users.

Another risk stemming from the utilisation of AI in healthcare is the issue of accessibility to diverse populations. Generative AI remains a high-cost system for its operators,[2] thus posing a risk that low and middle-income countries may not have access to the most effective AI. Furthermore, as AI systems become more widespread and operational costs decrease, there is a concern that populations in these countries may only have access to AI systems, thereby limiting direct contact with healthcare professionals to the wealthiest individuals.

Also regarding accessibility, there is a concern related to the language used because some LLMs only understand prompts in English. This limitation makes them inaccessible to a large part of the global population that is not fluent in the language. Additionally, it makes these systems more susceptible to disseminating false or misleading information in other languages.

An additional and significant risk involves biases. Often, the datasets used to train AI models are not sufficiently broad in terms of the variety of the population or regions, leading to the discrimination of individuals based on factors such as gender, ethnicity, age and region of residence, among others, sometimes reinforcing stereotypes. These challenges are particularly pronounced in the healthcare area. Biased AI systems may lead to automated misdiagnosis and hinder people's access to quality medical care.

It's crucial to consider the influence of the data used to train AI models on disease diagnosis. The data must be representative to ensure that the AI model isn't confined solely to the demographic represented during training, and the model should be continuously trained to be accurate and timely in the output provided by the system.

The WHO is also concerned with the impact of the use of AI on the job market. Like other fields, healthcare-related jobs will also be profoundly affected by AI. There is a significant shortage of healthcare professionals worldwide and many countries may attempt to address such a shortage by relying solely on AI. This could lead to neglecting the importance of direct doctor-patient contact for establishing more accurate diagnoses. On the other hand, it might provide access to a significant amount of information and knowledge that was not available in some regions of the world.

Healthcare professionals will also need extensive training to utilise AI in their daily routines, while respecting its limitations. Similarly, AI systems will need to maintain a high level of transparency given that they are accountable both for their decisions, and informing patients and professionals of the risks they present.

However, if AI systems are operated by technicians without medical training, their performance can drastically decrease. In addition, these workers may even develop psychological problems due to the demands of reviewing and filtering content without access to medical or psychological support.

Another crucial risk involving the use of AI in healthcare is related to cybersecurity. An extremely large amount of data is used to develop AI systems, especially LMMs, and as AI systems become more common, they will be subjected to more hacker attacks. This can result in the exposure of patients' confidential and sensitive data, as well as the external manipulation of the training dataset for AI models, thus altering their performance.

Furthermore, developers are concerned about a failure known as 'prompt injection', which has not yet been resolved. Through this flaw, malicious third parties can insert incorrect data into AI systems, causing them to function inappropriately, as well as defrauding users or altering the system's functionality.

Considering the lack of extensive regulation of AI, the WHO is also concerned about the compliance of data protection laws and human rights obligations. According to WHO's report, some AI tools may violate several major data protection laws, such as the European Union's General Data Protection Regulation (GDPR), and protection from automated decision-making.

The recent AI Act approved in the EU and similar legislation in other countries may lead to a requirement to conduct risk assessments before AI systems are used in different circumstances, including in life sciences applications. Such risk assessments are intended to mitigate the risks presented above, and other risks that might affect individuals' fundamental rights and liberties. To comply with these new requirements and offer AI responsibly, organisations have been establishing AI governance programmes to analyse and monitor the use of AI systems.

On a smaller scale, the WHO's report also highlights certain social risks, such as the dominance of major technology companies in this field and the environmental impact that may arise from high-energy consumption associated with the heavy use of AI.

The risks associated with the use of AI systems call for a broad discussion and, if necessary, further consensus on the standard for acceptable AI implementation and what is offered to the public, particularly within the life sciences and healthcare industry, so that individuals can benefit from the technology while minimising the risks associated with AI systems.

 

Notes


[1] Stanford University (2023), Artificial Intelligence Index Report 2023, AI Index Steering Committee.

[2] WHO (2024), Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models.