We cannot leave AI safety in the hands of tech giants; governments should step in, say experts

Experts at the first UN Security Council meeting on AI expressed concerns over malfunctioning AI systems and the interaction between AI and nuclear weapons
Representative photo: iStock.
A small group of tech giants steering the commercialisation of artificial intelligence (AI) can’t be entrusted with the safety of a nascent technology that is prone to “chaotic or unpredictable behaviour,” said an AI company’s co-founder at the first United Nations Security Council meeting on AI held on July 19, 2023.

The international community must work on developing ways to test AI systems’ capabilities, misuses and potential safety flaws, said Jack Clark, co-founder of the AI company Anthropic.

“We should think very carefully about how to ensure developers of these systems are accountable so that they build and deploy safe and reliable systems which do not compromise global security,” he said.

Clark urged governments to come together and make the development of powerful AI systems an inclusive endeavour rather than one dictated solely by a small number of firms competing with one another in the marketplace.

“We cannot leave the development of artificial intelligence solely to private sector actors,” he emphasised.

AI tools carry risks at multiple levels. Systems that can help us better understand biology may also be used to develop biological weapons. Once these systems are fully developed, people may identify new uses not anticipated by their developers, or the systems themselves could later exhibit chaotic or unpredictable behaviour, Clark added.

While Clark pointed out the absence of standards or best practices on how to test these systems for things such as discrimination, misuse or safety, UN Secretary-General Antonio Guterres said the UN is “the ideal place” to adopt such standards.

Guterres also underscored how AI has been amplifying bias, reinforcing discrimination and enabling new levels of authoritarian surveillance. "Without action to address these risks, we are derelict in our responsibilities to present and future generations. The advent of generative AI could be a defining moment for disinformation and hate speech," he observed.

Guterres also expressed concern over malfunctioning AI systems and the interaction between AI and nuclear weapons, biotechnology, neurotechnology and robotics.

Takei Shunsuke, Japan’s State Minister of Foreign Affairs, highlighted the importance of human-centric and trustworthy AI. The development of AI should be consistent with democratic values and basic human rights, he added.

Down To Earth
www.downtoearth.org.in