Generative AI: We keep asking “Can it?” when we should ask “What does it normalise?”

We are obsessed with what GenAI can do because that is visible and impressive. We are far less attentive to what it normalises because that change is slow and quiet

Every powerful technology invites two kinds of questions. The first kind is technical: Can it work? Is it accurate? Is it safe? The second kind is cultural: What habits does it create? What behaviours does it reward? What does it quietly make normal?

When it comes to Generative AI, we are almost entirely stuck in the first category.

The public debate is dominated by performance questions. Can GenAI write correct answers? Can it avoid bias? Can it be regulated? Can it replace jobs? These questions matter. But they assume that GenAI is just another tool—something we pick up, use, and put down.

That assumption is deeply flawed.

GenAI is not merely a tool. It is a behaviour-shaping system. And the most important question is not what it can do, but what it trains humans to do differently.

The lens we need: Incentives and normalisation

A more useful way to analyse GenAI is through the lens of incentives and normalisation.

Every system rewards some behaviours and discourages others. Over time, rewarded behaviours become habits. Habits become norms. Norms become culture. This is how technologies reshape societies without formal decisions or grand plans.

GenAI systems reward speed over reflection, fluency over depth, and completion over struggle. They normalise delegation of thinking tasks that were once considered essential for learning and professional growth. They quietly shift the definition of “good work” from well-thought-out to well-generated.

This does not mean GenAI is harmful by design. But it does mean that its impact cannot be understood by accuracy metrics alone.

What we are focusing on right now

Today’s dominant questions are defensive. We worry about hallucinations, misuse, copyright violations, and job displacement. These are understandable fears. Institutions and regulators are reacting to visible risks.

The advantage of these questions is that they are measurable. You can test accuracy. You can audit bias. You can count jobs. You can write policies.

The limitation is that these questions assume human behaviour remains stable. They treat GenAI as an external force acting on a fixed society. That is rarely how change happens.

The questions we are avoiding

We are not seriously asking how GenAI changes learning itself. If students grow up in an environment where first drafts, explanations, and examples are instantly available, what happens to patience, intellectual endurance, and originality? The concern is not cheating. The concern is a world where meaningful effort quietly disappears, replaced by effortless outputs.

We are not asking how GenAI reshapes authority. When answers are always available and confidently worded, who decides what is correct? Expertise risks becoming performative rather than earned. Confidence begins to look like competence.

We are also not asking how GenAI alters decision-making in organisations. When summaries replace reading and recommendations replace judgment, responsibility becomes diffused. Decisions may feel informed, but no one fully owns the reasoning behind them.

These are uncomfortable questions because they do not have clean solutions. They demand redesign, not just regulation.

The case for GenAI: Why this shift may be necessary

Supporters of GenAI argue, with some justification, that every major technology has sparked similar anxieties. Calculators did not destroy mathematics. Search engines did not end thinking. GenAI, from this view, simply moves humans up the value chain.

There is evidence that GenAI can increase productivity, lower entry barriers, and help non-experts perform complex tasks. For many users, it reduces friction and cognitive overload. Used well, it can act as a thinking partner rather than a replacement. From this perspective, resisting GenAI is not wisdom but nostalgia. The real task is adaptation. This argument deserves respect.

The counterargument: When convenience becomes dependence

However, there is a crucial difference between tools that assist thinking and systems that replace the struggle of thinking. Struggle is not a bug in learning; it is a feature. Writing a poor first draft, debugging code, or wrestling with an idea builds judgment. When GenAI removes that friction entirely, it may also remove the formation of skill.

The risk is not that people become incapable overnight. The risk is gradual deskilling, where humans retain the ability to approve outputs but lose the ability to generate them independently. Over time, dependence becomes invisible. And what is invisible is rarely questioned.

A better set of questions

Instead of asking whether GenAI can do a task, we should ask whether humans should stop doing that task entirely. Instead of asking whether output is correct, we should ask whether reliance on that output changes how expertise is built. Instead of asking how to regulate GenAI tools, we should ask how to redesign education, work, and evaluation systems so that human judgment remains central. These are not anti-technology questions. They are system-design questions.

From capability to consequence

The debate around Generative AI is not missing intelligence; it is missing depth. We are obsessed with what GenAI can do because that is visible and impressive. We are far less attentive to what it normalises because that change is slow and quiet. The most important impact of GenAI will not be in spectacular failures or dramatic breakthroughs. It will be in the everyday habits it encourages, the skills it makes optional, and the values it reshapes without asking permission.

If we continue to ask only “Can it do this?”, we will wake up one day to a world where the deeper question, “Should we still do this ourselves?”, no longer has a clear answer. That is the debate we should be having.

Sanjay Fuloria is Professor and Director, Center for Distance and Online Education (CDOE), ICFAI Foundation for Higher Education (IFHE), Hyderabad

Views expressed are the author’s own and don’t necessarily reflect those of Down To Earth
