
WHO releases guidelines for multi-modal generative AI in healthcare that resonate with recommendations for other sectors

Rapid adoption of AI-driven models underscores the need to weigh their benefits and risks carefully

 
By Preetha Banerjee
Published: Friday 19 January 2024

The World Health Organization (WHO) has released comprehensive guidance on the ethical use and governance of large multi-modal models (LMMs) in healthcare. This fast-growing generative artificial intelligence (AI) technology, capable of processing diverse data inputs like text, videos and images, is revolutionising healthcare delivery and medical research.

LMMs, known for their ability to mimic human communication and perform tasks without explicit programming, have been adopted more rapidly than any other consumer technology in history, the United Nations health agency noted. Platforms like ChatGPT, Bard and BERT have become household names since entering the public consciousness only last year.

But this rapid adoption underscores the need to weigh their benefits and risks carefully.

Jeremy Farrar, WHO chief scientist, underlined the point: “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”

The guiding document identified five broad applications of LMMs in healthcare:

  • Diagnosis and clinical care, such as responding to patients' written queries
  • Patient-guided use for investigating symptoms and treatments
  • Clerical and administrative tasks in electronic health records
  • Medical and nursing education with simulated patient encounters
  • Scientific research and drug development

Despite these promising uses, LMMs pose risks, including the generation of false, inaccurate or biased statements, which could misguide health decisions, according to WHO. In addition, the data used to train these models can suffer from quality or bias issues, potentially perpetuating disparities based on race, ethnicity, sex, gender identity or age, the public health experts noted. 

There are broader concerns as well, such as the accessibility and affordability of LMMs, and the risk of 'automation bias' in healthcare, leading professionals and patients to overlook errors, according to the organisation. Cybersecurity is another critical issue, given the sensitivity of patient information and the reliance on the trustworthiness of these algorithms, it added.

WHO called for a collaborative approach involving governments, technology companies, healthcare providers, patients and civil society at all stages of LMM development and deployment.

Dr Alain Labrique, WHO director for digital health and innovation, stressed the need for cooperative global leadership to regulate AI technologies effectively: “Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs.”

Key recommendations for governments include:

  • Investing in public infrastructure, like computing power and public datasets, that adheres to ethical principles
  • Using laws and regulations to ensure LMMs meet ethical obligations and human rights standards
  • Assigning regulatory agencies to assess and approve LMMs for healthcare use
  • Introducing mandatory post-release audits and impact assessments

For developers, WHO advises engaging a wide range of stakeholders, including potential users and healthcare professionals, from the early stages of AI development. It also recommends designing LMMs to perform well-defined tasks with the necessary accuracy, and understanding their potential secondary outcomes.

WHO's new guidance thus offers a roadmap for harnessing the power of LMMs in healthcare while navigating their complexities and ethical considerations. The initiative marks a significant step towards ensuring that AI technologies serve the public interest, particularly in the health sector.

In May 2023, the health organisation had highlighted the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing and deploying AI for health. 

The six core principles identified by WHO are:

  1. Protect autonomy;
  2. Promote human well-being, human safety, and the public interest;
  3. Ensure transparency, explainability, and intelligibility;
  4. Foster responsibility and accountability;
  5. Ensure inclusiveness and equity; and
  6. Promote AI that is responsive and sustainable.

In the document released last year, WHO listed concerns that call for rigorous oversight if the technologies are to be used in safe, effective and ethical ways. These included:

  1. The data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness;
  2. Large language models (LLM) generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses;
  3. LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response; 
  4. LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content; and
  5. Policy-makers should ensure patient safety and protection while technology firms work to commercialise LLMs.

The 2024 World Economic Situation and Prospects report highlighted that although AI will transform the labour market and enhance productivity, as other significant technological advancements have in the past, the impact may not be evenly distributed. This may widen disparities, the experts pointed out.

Following the launch of technologies like ChatGPT, there has been a marked shift from early adopters to mass-market adoption, the report underlined. “Around a third of surveyed firms globally were using generative AI regularly within six months of its introduction, and about 40 per cent planned to expand their AI investment.”

AI could exacerbate inequalities within and between countries, according to the authors of the report. It might reduce demand for low-skilled workers and negatively impact disadvantaged groups and lower-income countries reliant on low-skill-intensive economic activities. This is particularly true for generative AI, which can automate some high-skilled tasks, raising concerns about job displacement in clerical, technical and service jobs.

Further, AI could disproportionately affect women, potentially widening gender employment and wage gaps, the UN report warned. Women are often overrepresented in roles with higher automation risks but also more frequently employed in jobs requiring interpersonal skills that are hard to automate, the authors added.

Workers in low-income developing countries face a dual challenge. They are less likely to be affected by automation due to fewer AI-enabled jobs but also less likely to benefit from AI-driven productivity gains.

Infrastructure gaps, such as access to digital education and the internet, limit these countries' ability to fully leverage AI advancements, potentially aggravating productivity and income disparities.

More worryingly, the World Economic Forum’s (WEF) Global Risks Report for 2024 also identified AI-generated disinformation and misinformation, particularly through the manipulation of media content and the creation of deepfakes, as one of the most significant global risks over the next two years, especially in the context of major upcoming elections.

They are considered the biggest short-term risks, surpassing concerns over a persistent cost-of-living crisis and societal polarisation, WEF noted. 

The widespread use of such tools may undermine the legitimacy of newly elected governments, it added. Around three billion people are expected to vote in national elections across various countries over the next 24 months, making this issue particularly pressing. 

The European Union reached a provisional agreement on the AI Act in December, aiming to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values. This is particularly relevant to the EU elections, which are considered uniquely vulnerable to such attacks because all 27 member states vote collectively.

Alongside AI, the WEF report also categorises quantum computing as a potential disruptor, highlighting security concerns such as “harvest attacks”, where criminals collect encrypted data now for future decryption with advanced quantum computers.
