
The world could face existential or unacceptable risks if governments delay addressing the challenges posed by artificial intelligence (AI) while waiting for perfect and complete information, a new report has cautioned.
It has become challenging for policymakers to determine the most effective strategy given that the technology’s potential risks and future trajectory are still unclear, read the report titled Regulating Under Uncertainty: Governance Options for Generative AI.
The boom in generative AI began in 2022 when United States-based OpenAI launched ChatGPT. Generative AI uses algorithms to create new content, including audio, code, images, and text. While this breakthrough has sparked excitement, it has also raised concerns, particularly that companies may prioritise rapid advancements over safety in their race to develop sophisticated AI systems.
“Regulation is both urgently needed and unpredictable. It also may be counterproductive, if not done well,” Florence G, a visiting professor of private law at the Cyber Policy Center and the author of the report, wrote.
Governments have begun taking steps to regulate AI, adopting approaches that range from the US’ laissez-faire model, which resists government intervention in business, to China’s command-and-control model, where the government issues and enforces strict standards.
The European Union (EU) occupies a middle ground, maintaining a dialogic relationship with companies to respond incrementally to developments and identify potential harms. Other nations, including Brazil and Canada, have also initiated work on AI regulation, though the EU, China, and the US are leading the way.
Governments are facing the challenge of finding the sweet spot in AI regulation.
“If they act aggressively to mitigate all hypothetical risks, they might inhibit the development of the technology. If they act too conservatively at the outset, they might miss the chance to steer the industry toward the safe development of the technology and away from foreseeable harms,” the report read.
A major hurdle for governments is their lack of expertise relative to the private sector, which holds most of the knowledge about AI, leaving them ill-equipped to design and enforce a new regulatory regime.
This has prompted some governments to take laissez-faire approaches, assuming that technology companies are best positioned to anticipate and mitigate risks, and that they have a vested interest in effectively addressing these risks.
In 2023, leading companies such as Anthropic, Google, Microsoft, and OpenAI established the Frontier Model Forum, which is dedicated to developing safe and responsible advanced AI models, advancing AI safety research, identifying best practices, and collaborating with policymakers, academics, civil society, and businesses to share knowledge about trust and safety risks.
But the companies stopped short of committing to any binding targets.
Ensuring that companies adhere to these standards and commitments is another challenge, complicated by the absence of a dedicated independent oversight body. “While independent watchdogs and nongovernmental organisations can monitor corporate behaviour, their efforts alone may be insufficient,” the author stressed.
The primary objective of industry players is to generate profit and capture market share, she underlined. The report also takes note of international organisations and multilateral institutions that have initiated efforts to deal with challenges and opportunities presented by generative AI.
The United Nations General Assembly has also taken steps to address AI risks. In March 2024, it adopted a resolution focusing on the role of international law in governing AI and urging member states and stakeholders to “refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights”.
A few months later, in July 2024, the General Assembly adopted a second resolution focusing on bridging gaps in AI development, calling on member states to incorporate capacity-building plans into their national AI strategies where possible.
In April 2024, the UN released the zero draft of its Global Digital Compact, which again is non-binding. It includes three commitments: establishing an International Scientific Panel on AI, initiating an annual global dialogue on AI governance, and establishing a global AI fund financed by governments and private companies to enable developing nations to benefit from advances in the technology.
As for treaties, the Council of Europe, an intergovernmental organisation created in 1949 with a human rights mandate, adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law on May 17, 2024.
“The Council of Europe’s treaty represents the first-ever international legally binding treaty on artificial intelligence. Unlike the EU’s AI Act, which applies only to EU member states, this treaty has potential global reach, aiming to establish a minimum standard for protecting human rights from risks posed by AI,” the report reads.
While the Framework Convention was immediately signed by the EU, US, and United Kingdom, it is not without flaws. Its final draft allows countries to either apply the framework to all private actors or focus only on risks arising from specific activities, leading to potential inconsistencies in enforcement.