
AI creating new way of thinking; overdependence could erode human critical thinking, scientists warn

Digital literacy and critical thinking skills should be promoted to help people operate in AI-mediated information environments, experts propose

Rohini Krishnamurthy

Artificial intelligence (AI) is paving the way for a new thinking system, one that could shape the evolution of human cognition and, through overdependence, put critical thinking at risk, warns a new article published in the journal Nature Human Behaviour.

The new form of thinking is called “System 0”. It represents the outsourcing of certain cognitive tasks to AI, which can process vast amounts of data and perform complex computations beyond human capabilities.

“The risk is relying too much on System 0 without exercising critical thinking. If we passively accept the solutions offered by AI, we might lose our ability to think autonomously and develop innovative ideas. In an increasingly automated world, it is crucial that humans continue to question and challenge the results generated by AI,” the experts said in a statement.

Generative AI systems (like ChatGPT and Bard) can create images, audio, video and other content when they receive prompts from users. As of early 2023, some emerging generative AI systems had reached more than 100 million users, according to the US Government Accountability Office. “The rapid integration of these AI tools into our daily lives is reshaping how we think and make decisions,” the paper read.

System 0 operates alongside two established models of human thought: System 1, which describes fast, intuitive thinking, and System 2, which describes slow, analytical thinking. Daniel Kahneman, the psychologist who won the Nobel Prize in Economics in 2002 for his work on human decision-making, popularised systems 1 and 2.

The article explained that the authors deliberately chose the term System 0 to emphasise its foundational and pervasive role in modern cognition. “Unlike system 1 and system 2 (which operate within the individual mind), system 0 forms an artificial, non-biological underlying layer of distributed intelligence that interacts with and augments both intuitive and analytical thinking processes,” the paper stated.

System 0 qualifies as a distinct thinking system because it satisfies, to varying degrees, the established criteria for cognitive extension: information flow, reliability, durability, trust, procedural transparency, informational transparency and individualisation.

Unlike System 1 and System 2, System 0 lacks inherent meaning-making capabilities. Although it can process and manipulate data with remarkable efficiency, it may not truly understand the information it handles.

To derive meaning from AI outputs, humans must still interpret them using System 1 and System 2. The researchers also acknowledged that System 0 can offer benefits to humans.

“Transparency, accountability and digital literacy are key elements to enable people to critically interact with AI,” the experts warned. “Educating the public on how to navigate this new cognitive environment will be crucial to avoid the risks of excessive dependence on these systems,” they added.

Going forward, the researchers proposed developing frameworks to evaluate the reliability, transparency and potential biases of the AI systems that make up System 0, and establishing guidelines for the responsible and ethical use of AI in decision-making processes.

They also called for promoting digital literacy and critical thinking skills to help people operate in AI-mediated information environments, and for interdisciplinary research on the cognitive, psychological and social effects of human–AI integration.