AI can dangerously misinform patients: WHO calls for policies for its safe, ethical use in healthcare
Unsafe use of AI can lead to people losing faith in the technology, thereby undermining its benefits
Published: Wednesday 17 May 2023
The use of artificial intelligence (AI) in healthcare systems may be harmful if not monitored and regulated, the World Health Organization (WHO) warned on May 16, 2023.
The United Nations health agency recognised that AI can benefit medical professionals, patients, researchers and scientists, but noted that, like any new technology, its risks should be examined before widespread use.
The caution applied specifically to large language model (LLM) tools such as “ChatGPT, Bard, Bert and many others that imitate understanding, processing and producing human communication”.
WHO called for widespread adherence to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation of such instruments in health settings.
The global health organisation listed the main concerns:
- Data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses;
- LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response;
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.
The public health experts at WHO recommended that “policymakers ensure patient safety and protection while technology firms work to commercialise LLMs”.
This will help protect and promote human well-being, human safety, and autonomy, and preserve public health, said WHO.
It also outlined a set of principles to remember when designing, developing and deploying AI in health settings:

- Protect autonomy;
- Promote human well-being, human safety and the public interest;
- Ensure transparency, explainability and intelligibility;
- Foster responsibility and accountability;
- Ensure inclusiveness and equity;
- Promote AI that is responsive and sustainable.
Without such safety measures and ethical practices, the general populace may lose faith in the technology, which will ultimately undermine its long-term benefits such as improving access to health information, aiding decision-making and enhancing diagnostic capacity in under-resourced settings, the global health body added.