ICMR releases guidelines for artificial intelligence use in the health sector

Guiding document outlines 10 key patient-centric ethical principles for AI application in health sector 

By Taran Deol
Published: Monday 20 March 2023

Artificial intelligence (AI) has made inroads into every sector and healthcare is no exception. Recognising this, the Indian Council of Medical Research (ICMR) has released Ethical Guidelines for AI in Healthcare and Biomedical Research to “guide effective yet safe development, deployment and adoption of AI-based technologies”. 

Diagnosis and screening, therapeutics, preventive treatments, clinical decision-making, public health surveillance, complex data analysis, predicting disease outcomes, behavioural and mental healthcare and health management systems are among the recognised applications of AI in healthcare, ICMR noted.

Since AI cannot be held accountable for the decisions it makes, “an ethically sound policy framework is essential to guide the AI technologies development and its application in healthcare. Further, as AI technologies get further developed and applied in clinical decision making, it is important to have processes that discuss accountability in case of errors for safeguarding and protection,” the ICMR guiding document mentioned. 

It outlined 10 key patient-centric ethical principles for AI application in the health sector for all stakeholders involved. These are accountability and liability, autonomy, data privacy, collaboration, risk minimisation and safety, accessibility and equity, optimisation of data quality, non-discrimination and fairness, validity and trustworthiness.

The autonomy principle ensures human oversight of the functioning and performance of the AI system. Before initiating any process, it is also critical to obtain the consent of the patient, who must be informed of the physical, psychological and social risks involved. 

The safety and risk minimisation principle is aimed at preventing “unintended or deliberate misuse”. Among a host of other measures, it calls for anonymised data delinked from global technology to avoid cyber attacks and a favourable benefit-risk assessment by an ethics committee. 

The accountability and liability principle underlines the importance of regular internal and external audits to ensure optimum functioning of AI systems; the results of these audits must be made available to the public. The accessibility, equity and inclusiveness principle acknowledges that the deployment of AI technology assumes widespread availability of appropriate infrastructure and thus aims to bridge the digital divide. 

The guidelines also outlined a brief for relevant stakeholders, including researchers, clinicians / hospitals / public health systems, patients, ethics committees, government regulators and the industry. Arguing that developing AI tools for the health sector is a multi-step process involving all these stakeholders, the document noted: 

Each of these steps must follow standard practices to make the AI-based solutions technically sound, ethically justified and applicable to a large number of individuals with equity and fairness. All the stakeholders should adhere to these guiding principles to make the technology more useful and acceptable to the users and beneficiaries of the technology.

As per the guidelines, the ethical review process for AI in health comes under the domain of the ethics committee, which assesses a host of factors including data source, quality, safety, anonymisation and/or data piracy, data selection biases, participant protection, payment of compensation and the possibility of stigmatisation, among others. 

The body is “responsible for assessing both the scientific rigour and ethical aspects of all health research and should ensure that the proposal is scientifically sound and weigh all potential risks and benefits for the population where the research is being carried out,” the document notes. 

Informed consent and governance of AI tools in the health sector are other critical areas highlighted in the guidelines; the latter is still at a preliminary stage even in developed countries. India has a host of frameworks that marry technological advances with healthcare. 

These include the Digital Health Authority for leveraging Digital Health Technologies under the National Health Policy (2017), the Digital Information Security in Healthcare Act (DISHA), 2018 and the Medical Device Rules, 2017.
