The norm in global AI governance right now is performance and scale; India can rewrite that
When the Raisina Dialogue convened its “Tomorrowland” sessions this year, the conversation moved swiftly through generative AI, digital sovereignty, and the geopolitics of semiconductor supply chains. What it moved past, with equal speed, was a quieter question: what does it cost the earth to run these systems?
That cost is no longer abstract. A single query to a large language model consumes several times more energy than a standard web search. Training GPT-4, by widely cited estimates, required over 50 gigawatt-hours of electricity. Data centres globally now account for around 1.5 per cent of global electricity consumption, a share projected to roughly double by 2030 as AI workloads multiply. Water consumption is the less-discussed twin of this crisis. Cooling a mid-sized data centre requires millions of litres of water annually. In 2023, Google reported consuming over 6 billion gallons of water to cool its data centres. In water-scarce regions, these facilities sit in direct competition with agriculture and drinking water needs.
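The scale gap in these figures can be made concrete with back-of-envelope arithmetic. The sketch below uses the article's cited training estimate; the per-search energy figure and the "several times" multiplier are illustrative assumptions, not measurements.

```python
# Back-of-envelope arithmetic on the figures cited above.
# SEARCH_WH and LLM_MULTIPLIER are illustrative assumptions.

SEARCH_WH = 0.3          # assumed energy per standard web search, in watt-hours
LLM_MULTIPLIER = 10      # assumed "several times more" factor for an LLM query
GPT4_TRAINING_GWH = 50   # widely cited training estimate from the text

llm_query_wh = SEARCH_WH * LLM_MULTIPLIER            # energy per LLM query (Wh)
training_wh = GPT4_TRAINING_GWH * 1e9                # 1 GWh = 1e9 Wh
queries_equivalent = training_wh / llm_query_wh      # queries per training budget

print(f"One LLM query: ~{llm_query_wh:.1f} Wh vs {SEARCH_WH} Wh per search")
print(f"Training budget equals ~{queries_equivalent:.1e} queries' worth of energy")
```

Even under these rough assumptions, a single training run represents tens of billions of queries' worth of electricity, which is why inference efficiency alone does not settle the accounting.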
The Dialogue’s Tomorrowland pillar positions AI as the architecture of a coming world order. That framing is fair. But any honest accounting of that architecture must include its ecological footprint. The Saṁskāra of advancement, the deep cultural imprinting of what progress means and who it serves, needs to be rewritten to include an ethical audit of resource extraction.
India is well-positioned to lead that rewriting.
The Digital Public Infrastructure that India built over the past decade offers a different model of what AI-adjacent technology looks like when designed under constraint. Aadhaar, UPI, and the ONDC framework were not built to maximise computational intensity. They were built to work at scale across bandwidth-limited, energy-uncertain conditions. The result is a stack that delivers financial inclusion, identity verification, and open commerce at a fraction of the infrastructure cost of comparable Western deployments.
UPI alone processed over 16 billion transactions in a single month in 2024, running on architecture that requires a fraction of the server infrastructure needed by equivalent payment platforms in the United States or Europe. This is not an accident of underdevelopment. It is a design philosophy: frugal by intention, interoperable by default, and public in ownership.
Frugal AI, as a concept, extends this philosophy into the AI layer. It asks: what is the minimum of computational resources required to produce a socially useful outcome? It pushes back against the assumption, dominant in Silicon Valley and increasingly in Chinese and European AI labs, that model size is a proxy for model value. Mistral, the French AI company, has demonstrated that a 7-billion parameter model with careful fine-tuning can outperform much larger models on specific tasks. India’s own AI research institutions, including IIT Madras and IISc Bangalore, have produced lightweight models optimised for Indian language processing without the energy overhead of frontier models.
The technopolar world, as described by Ian Bremmer, is one where technology companies accumulate geopolitical power previously held by states. In that world, the environmental costs of AI become a governance problem, not merely a corporate responsibility problem. A hyperscaler that deploys a water-hungry data centre in a drought-prone district of Rajasthan or Karnataka is making a political choice about whose resource claims matter. The communities whose aquifers are drawn down to cool servers do not sit at the table when those investment decisions are made.
India’s regulatory response to this has been uneven. The National Data Governance Framework and the Digital Personal Data Protection Act address data flows but are largely silent on the physical resource implications of data infrastructure. There is no mandatory environmental impact assessment framework specific to AI infrastructure. No water use reporting requirement. No energy source disclosure obligation tied to AI procurement.
This gap matters because India is about to become one of the largest markets for AI deployment. The government’s IndiaAI Mission has committed Rs 10,372 crore toward building compute capacity, including a plan for shared GPU infrastructure. That infrastructure, if built to current global standards, will carry the same energy and water intensity as equivalent facilities elsewhere. If built to a different standard, one that prioritises renewable energy mandates, water recycling, and computational efficiency benchmarks, it becomes something else: a proof point that AI governance includes ecological governance.
The ethical argument here is not sentimental. It draws from the same tradition of cost-benefit analysis that governs infrastructure investment decisions. When a dam is built, its social and environmental costs are, at least formally, part of the approval process. AI infrastructure is infrastructure. It should be governed as such.
What does an ethical audit of AI resource use look like in practice? It starts with disclosure: mandatory reporting of energy consumption, water use, and carbon intensity for any AI system procured or deployed by the government. It extends to procurement standards: preference for models that meet efficiency thresholds, not just performance benchmarks. It includes siting policy: data centres built in locations with access to renewable energy and adequate water resources, not simply where land is cheap and regulatory oversight is light.
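The procurement standard described above, a dual gate of performance and resource frugality, can be sketched in a few lines. Every field name and threshold value here is hypothetical; the point is only the logic of requiring both criteria rather than performance alone.

```python
# A minimal sketch of dual-gate procurement eligibility: a model must meet
# a performance benchmark AND an efficiency threshold. All field names and
# threshold values are hypothetical illustrations, not proposed standards.

def eligible(model, min_accuracy=0.85, max_wh_per_1k_queries=5.0):
    """Return True only if the model clears both the performance gate
    and the resource-frugality gate."""
    return (model["accuracy"] >= min_accuracy
            and model["wh_per_1k_queries"] <= max_wh_per_1k_queries)

frontier = {"accuracy": 0.91, "wh_per_1k_queries": 30.0}  # stronger, power-hungry
frugal = {"accuracy": 0.87, "wh_per_1k_queries": 3.2}     # leaner, slightly less accurate

print(eligible(frontier))  # False: fails the energy gate despite higher accuracy
print(eligible(frugal))    # True: clears both gates
```

Under this logic, the marginally more accurate frontier model loses the tender, which is precisely the inversion of incentives the audit is meant to produce.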
India’s DPI model already embeds a version of this logic. The stack is open, auditable, and designed for the public interest rather than for rent extraction. Extending that logic to the AI layer means insisting that AI systems procured for public use, whether for agricultural advisory services, health diagnostics, or judicial assistance, meet standards of resource frugality commensurate with the scale of their deployment.
The Tomorrowland conversations at forums like Raisina shape what the international community treats as normal in AI governance. Right now, the norm is performance and scale. India has the institutional experience, the development context, and the democratic legitimacy to argue for a different norm: one that treats ecological integrity as a condition of technological advancement, not an afterthought to it.
The Saṁskāra of progress is not fixed. It is made and remade by the choices that states, researchers, and institutions embed in the systems they build. India’s frugal AI tradition is not a limitation to be overcome. It is a governance asset to be deployed.
Sagari Gupta is a public policy researcher with over eight years of experience in social development, governance reforms, and data-driven policy analysis in India.
Views expressed are the author’s own and don’t necessarily reflect those of Down To Earth