Science & Technology

How will we know if AIs gain consciousness? A new checklist might help us assess

No current AI system appears to be a strong candidate for consciousness, but such systems could plausibly be built in the near future, says report

 
By Nandita Banerji
Published: Monday 28 August 2023
Photo: iStock

Consciousness in artificial intelligence has long been fuel for science fiction. Now researchers have come up with a checklist derived from six neuroscience-based theories, which could aid in assessing whether an AI system is conscious.

Rapid progress in the field has raised the possibility that conscious AI systems could be built in the relatively near term, the new study said. However, large language models capable of imitating human conversation may also lead people to believe that the systems they interact with are conscious. 

Human-like behaviour makes it difficult to judge what is actually going on inside an AI system: as these systems get better at mimicking human speech, users may come to believe the machines are conscious.

Even well-known AI experts have suggested that AI networks might exhibit some degree of consciousness.


Read more: AI isn’t close to becoming sentient — the real danger lies in how easily we’re prone to anthropomorphise it


Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, posted last year on X, previously known as Twitter, that some of the most cutting-edge AI networks might be “slightly conscious”.

In May this year, a study by Microsoft found that GPT-4, OpenAI’s more powerful version of ChatGPT, can be trained to reason and use common sense like humans.

The new research, published in the arXiv preprint repository ahead of peer review, assessed existing AI systems in detail. The group of scientists came up with a checklist of criteria that, if met, would indicate a system has a high chance of being conscious. 

At present, no AI system appears to be a strong candidate for consciousness, the study said. 

Defining ‘consciousness’

According to the study, to say that a person, animal or AI system is conscious is to say either that they are currently having a conscious experience or that they are capable of having conscious experiences. 


Read more: ‘If AI goes wrong, it can go quite wrong’: Here’s ChatGPT CEO’s full testimony in US Congress


The authors also explained why they chose “conscious” over “sentient”, which is sometimes used synonymously. “Sentient is sometimes used to mean having senses, such as vision or olfaction. However, being conscious is not the same as having senses,” the researchers said. 

The study authors used a range of neuroscience-based theories to define consciousness with the idea that if an AI system functions in a way that matches aspects of many of these theories, then there is a greater likelihood that it is conscious.

The scientific theories included recurrent processing theory, global workspace theory, computational higher-order theories and others.

The authors did not consider integrated information theory, because it is not compatible with computational functionalism, the assumption that consciousness depends on how a system processes information, irrespective of its physical components. 

The assessment of consciousness in AI is scientifically tractable, the authors said, because consciousness can be studied scientifically. Along with the checklist, which the researchers called a ‘rubric’, they also provided initial evidence that many of the indicator properties can be implemented in AI systems using current techniques. 


Read more: Why Artificial Intelligence developers should be more responsible


The scientists also considered the risks from under- and over-attribution of consciousness to AI systems and looked into the relationship between consciousness and AI capabilities.

“The main aim of this report is to propose a scientific approach to consciousness in AI and to establish what current scientific theories imply about this prospect, not to investigate its moral or social implications,” the report said.

The researchers also called for greater research on the science of consciousness and its application to AI.

“We also recommend urgent consideration of the moral and social risks of building conscious AI systems, a topic which we do not address in this report. The evidence we consider suggests that, if computational functionalism is true, conscious AI systems could realistically be built in the near term,” the paper said.
