India, China, the European Commission and over 50 countries signed the statement on “Inclusive and Sustainable Artificial Intelligence (AI) for People and the Planet” at the AI Action Summit, which concluded in Paris on February 11, 2025. Even so, many experts have raised concerns that the summit exposed deeper divisions between countries.
The United States and the United Kingdom did not sign the statement. The signatories agreed to develop sustainable and inclusive AI, narrow inequalities and assist developing countries with capacity-building in artificial intelligence. In his speech, US Vice-President JD Vance argued that excessive regulation of AI could kill the transformative sector and that AI should be free from ideological bias.
“The AI Summit ends in rupture. AI accelerationists want pure expansion — more capital, energy, private infrastructure, no guardrails. Public interest camp supports labour, sustainability, shared data, safety and oversight. The gap never looked wider. AI is in its empire era,” Kate Crawford, research professor at the University of Southern California in Los Angeles, wrote on microblogging platform X (formerly Twitter). “So it goes deeper than just the US and UK not signing the agreement. There are deep ideological divides and multiple fractures,” the expert added.
Further, India, Kenya, Germany, Chile, Finland, Slovenia, France, Nigeria and Morocco have launched a Public Interest AI Platform and Incubator to narrow the divide between existing public and private initiatives on public interest AI and to address digital divides. It aims to co-create a trustworthy AI ecosystem that advances the public interest by supporting technical assistance and capacity-building projects in data, model development, transparency, audit, compute, talent, financing and collaboration.
The AI Action Summit, held on February 10-11, 2025, focused on five themes: Public Interest AI (defining, building and delivering critical open AI infrastructure for the global AI sector, for beneficial social, economic and environmental outcomes in the public good); Future of Work (promoting the socially responsible use of artificial intelligence through sustained social dialogue); Innovation and Culture (building sustainable, innovative ecosystems that work with all economic sectors, especially the creative and cultural industries); Trust in AI (consolidating mechanisms to build trust in AI based on a scientific consensus on safety and security issues); and Global AI Governance (shaping an inclusive and effective framework of international governance for AI).
Though the statement mentioned that countries would address the risks AI poses to information integrity and continue the work on AI transparency, it marked a departure from the Bletchley Declaration, which established a shared understanding of the opportunities and risks posed by frontier AI. That declaration was signed by 28 countries and the European Union at the AI Safety Summit at Bletchley Park, UK, on November 1, 2023.
In the following year, the summit was held in Seoul, South Korea. The Seoul Declaration, signed by 10 countries and the EU in 2024, confirmed a shared understanding of the opportunities and risks posed by AI.
The sidelining of AI safety was palpable during the summit. Vance said: “I am not here to talk about AI safety, which was the title of the conference a couple of years ago. I am here to talk about AI technology.”
In his newsletter Transformer, journalist Shakeel Hashim wrote that corporate executives took centre stage, while the International AI Safety Report, which is meant to be the AI field’s equivalent of the Intergovernmental Panel on Climate Change’s climate assessments, was relegated to a side event.
The report, by 100 AI experts, is the world’s first comprehensive synthesis of the current literature on the risks and capabilities of advanced AI systems. This summary of scientific evidence on the safety of general-purpose AI could help create a shared international understanding of the risks and how they can be mitigated. The report delivers on the mandate agreed at the Bletchley AI Safety Summit, where 30 nations agreed to build a shared, scientific and evidence-based understanding of frontier AI risks through an international, independent and inclusive report on the risks and capabilities of frontier AI.
Europe, too, made it clear that it is keen on being a part of the race, which already has the US and China taking the lead. “We want Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere. AI can help us boost our competitiveness, protect our security, shore up public health, and make access to knowledge and information more democratic. And this is what you — entrepreneurs and researchers, investors and business leaders — are showcasing here in Paris. This is a glimpse of the AI continent we want to become,” said Ursula von der Leyen, president of the European Commission.
The first rules of the EU’s AI Act, the European regulation on AI, took effect on February 2, 2025. In 2024, Mark Zuckerberg, founder and chief executive of Meta, and 100 others, including researchers, wrote an open letter stating that Europe has become less competitive and less innovative than other regions and now risks falling further behind in the AI era due to inconsistent regulatory decision-making.
For the first time, countries also discussed AI’s energy needs. According to the statement, the signatories will promote an international discussion on AI and the environment, and they welcomed an observatory on the energy impact of AI, set up with the International Energy Agency (IEA), to showcase energy-friendly AI innovation.
The IEA will launch the Observatory on Energy, AI and Data Centres on April 10, 2025. It will gather the most comprehensive and recent data worldwide on AI’s electricity needs, in addition to tracking cutting-edge AI applications across the energy sector.
Indian Prime Minister Narendra Modi, who co-chaired the summit, called for green power to fuel AI’s future. “Sustainable AI does not only mean using clean energy. AI models must also be efficient and sustainable in size, data needs and resource requirements,” he said.
UN Secretary-General Antonio Guterres also addressed the summit, noting that AI data centres already place “an unsustainable strain” on the planet. Data centres are facilities that house networks of computing systems and the infrastructure required to support AI’s heavy computational demands.
“As we move from text to video to image, these AI models are growing larger and larger, and so is their energy impact,” Vijay Gadepally, a senior scientist and principal investigator at MIT Lincoln Laboratory, said in a statement. “This is going to grow into a pretty sizeable amount of energy use and a growing contributor to emissions across the world,” the expert added.
“It is crucial to design AI algorithms and infrastructures that consume less energy and integrate AI into smart grids to optimise power use,” Guterres stressed. “From data centres to training models, AI must run on sustainable energy so that it fuels a more sustainable future.”
The IEA estimates that a single ChatGPT query requires 2.9 watt-hours of electricity, nearly 10 times the 0.3 watt-hours needed for a Google search. Data centre power demand will grow 160 per cent by 2030, according to estimates from Goldman Sachs, a global investment bank. The expected rise in carbon dioxide emissions will represent a “social cost” of $125-140 billion (at present value), it added.
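As a rough back-of-the-envelope check on those per-query figures, the sketch below computes the ratio and scales it to an assumed daily query volume; the 1 billion-queries-a-day figure is a hypothetical chosen purely for illustration, not an IEA or article number.

```python
# Per-query energy figures cited above (IEA estimates).
CHATGPT_WH_PER_QUERY = 2.9   # watt-hours per ChatGPT query
GOOGLE_WH_PER_QUERY = 0.3    # watt-hours per Google search

ratio = CHATGPT_WH_PER_QUERY / GOOGLE_WH_PER_QUERY
print(f"A ChatGPT query uses ~{ratio:.1f}x the energy of a Google search")  # ~9.7x

# Hypothetical scale-up: 1 billion ChatGPT-style queries a day (assumed volume
# for illustration only).
queries_per_day = 1_000_000_000
annual_twh = CHATGPT_WH_PER_QUERY * queries_per_day * 365 / 1e12  # Wh -> TWh
print(f"At that volume, queries alone would use ~{annual_twh:.1f} TWh a year")  # ~1.1 TWh
```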
Globally, data centres consume 1-2 per cent of overall power, but this share will likely rise to 3-4 per cent by the end of the decade, according to Goldman Sachs. When the costs of delivering AI to consumers are considered, the figure could potentially reach 21 per cent by 2030, according to Gadepally.
Data centres account for around 2-4 per cent of total electricity consumption today in the US, Europe and China. In the US, the demand for power is expected to rise roughly 2.4 per cent between 2022 and 2030, with around 0.9 percentage points of that growth tied to data centres.
The investment bank Barclays estimated that data centres account for 3.5 per cent of US electricity consumption today, a share that could reach 5.5 per cent in 2027 and more than 9 per cent by 2030. In Europe, power demand could grow by 50 per cent between 2023 and 2033, driven by both the expansion of data centres and an acceleration of electrification. By 2030, the power needs of data centres will match the current total consumption of Portugal, Greece and the Netherlands combined.
Power demand for data centres in the US is expected to reach 606 terawatt-hours (TWh) by 2030, up from 147 TWh in 2023, amounting to 11.7 per cent of total US power demand, according to projections by global consultancy McKinsey & Company.
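Those McKinsey figures imply both a steep growth multiple and a rough total for US power demand in 2030; the short calculation below simply re-derives them and is not an additional estimate.

```python
# Simple cross-check of the McKinsey projection cited above.
demand_2023_twh = 147          # US data-centre demand in 2023 (TWh)
demand_2030_twh = 606          # projected US data-centre demand in 2030 (TWh)
share_of_us_total = 0.117      # 11.7 per cent of total US power demand

growth_multiple = demand_2030_twh / demand_2023_twh
implied_us_total_2030 = demand_2030_twh / share_of_us_total
print(f"Data-centre demand grows ~{growth_multiple:.1f}x from 2023 to 2030")        # ~4.1x
print(f"Implied total US power demand in 2030: ~{implied_us_total_2030:,.0f} TWh")  # ~5,180 TWh
```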
AI’s energy demand is already being felt. According to the IEA, data centres consumed 1.65 billion gigajoules of electricity in 2022, about 2 per cent of global demand. By 2026, their total electricity consumption could exceed 1,000 TWh, roughly equivalent to the annual electricity consumption of Japan.
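For readers converting between units, the 2022 figure can be cross-checked with a simple conversion (1 TWh equals 3.6 million gigajoules); the global-demand denominator used below is an approximation for illustration, not a number from the article.

```python
# Convert the IEA's 2022 data-centre consumption figure from gigajoules to TWh.
GJ_PER_TWH = 3.6e6             # 1 TWh = 3.6 million gigajoules
consumption_2022_gj = 1.65e9   # 1.65 billion GJ (IEA figure cited above)

consumption_2022_twh = consumption_2022_gj / GJ_PER_TWH
print(f"2022 data-centre consumption: ~{consumption_2022_twh:.0f} TWh")  # ~458 TWh

# Global electricity demand of ~25,000 TWh is an illustrative approximation;
# it puts the share near the 2 per cent cited in the article.
share = consumption_2022_twh / 25_000
print(f"Share of global demand: ~{share:.1%}")  # ~1.8%
```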
Further, investment in new data centres has surged since 2022, driven by growing digitalisation and the uptake of AI. An average data centre has a power demand of 5-10 megawatts (MW), but large hyperscale facilities, which are increasingly common, have power demands of 100 MW or more. The annual electricity consumption of one such hyperscale facility is equivalent to the electricity demand of around 350,000 to 400,000 electric cars.
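A rough calculation shows how a 100 MW hyperscale facility maps onto that electric-car comparison; the per-car consumption of 2,200-2,500 kWh a year used below is an assumption for illustration, not a figure from the article.

```python
# Annual electricity use of a 100 MW hyperscale data centre, assuming it draws
# its full rated power around the clock (a simplifying assumption).
HOURS_PER_YEAR = 8_760
facility_mw = 100

annual_kwh = facility_mw * 1_000 * HOURS_PER_YEAR   # MW -> kW, then kWh per year
print(f"Annual consumption: ~{annual_kwh / 1e6:,.0f} GWh")  # ~876 GWh

# Assumed per-car electricity use of 2,200-2,500 kWh a year (illustrative);
# this reproduces the 350,000-400,000-car range quoted above.
for kwh_per_car in (2_200, 2_500):
    print(f"Equivalent to ~{annual_kwh / kwh_per_car:,.0f} electric cars "
          f"at {kwh_per_car} kWh per car per year")
```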