Science & Technology

Why Artificial Intelligence developers should be more responsible

If we’re not careful about possible mishaps, all the excitement surrounding AI could end up in a dystopian world

By Akshit Sangomla
Published: Wednesday 06 February 2019

When people hear the term ‘Artificial Intelligence’ (AI), the first thoughts and images that come to mind are those fed to us by Hollywood movies of the past few decades and, more recently, by Netflix series such as Black Mirror.

This fictional representation paints a dire picture of AI destroying or enslaving humanity, but that vision is far-fetched and distorts what is really happening in the field.

At a recently concluded conference and exhibition on the subject organised by the Confederation of Indian Industry (CII), AI was discussed more as a tool, with speakers preferring the terms ‘Augmented Intelligence’ and ‘Machine Learning’ over AI.

CII has initiated three outcome-based AI task forces on skill development, education and agriculture, which are currently running pilot projects at various sites.

Top companies like Microsoft and IBM, along with a litany of start-ups, are trying to solve a range of business and technology problems using AI in fields like agriculture, health, education and waste management.

For instance, Microsoft India has teamed up with the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) to develop an AI-based sowing app. It crunches both real-time and historical (30-year) data on precipitation and soil moisture to tell farmers when to sow their crops and at what depth.

Farmers involved in a pilot project in Andhra Pradesh (AP) in 2016 reported increases in crop production of up to 30 per cent, according to Microsoft. These farmers were told by the AI to sow their groundnut crop after the third week of June, instead of the first week, when sowing is traditionally done. Along with this, the farmers received other advisories, such as the amount of fertiliser and manure to use and measures for the treatment of seeds.
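The actual Microsoft–ICRISAT system is proprietary, but the idea of turning historical rainfall data into a sowing date can be illustrated with a minimal sketch. Everything below is hypothetical: the rainfall figures, the germination threshold and the three-week window are made-up illustrative numbers, not values from the real app.

```python
# Hypothetical sketch of a rainfall-based sowing advisory.
# Illustrative 30-year average rainfall (mm) per week from early June onward;
# these numbers are invented for the example, not real data.
avg_rain_mm = [12, 18, 25, 48, 60, 72, 55, 40, 30, 22]

GERMINATION_NEED_MM = 150  # assumed moisture a crop needs soon after sowing
WINDOW_WEEKS = 3           # assumed number of weeks over which that moisture must fall

def recommend_sowing_week(rain, need_mm, window=WINDOW_WEEKS):
    """Return the earliest (1-indexed) week whose next `window` weeks of
    expected rainfall meet the crop's assumed germination requirement."""
    for week in range(len(rain) - window + 1):
        if sum(rain[week:week + window]) >= need_mm:
            return week + 1
    return None  # no suitable window in the data

print(recommend_sowing_week(avg_rain_mm, GERMINATION_NEED_MM))  # → 4
```

With these invented figures, the earliest week with enough expected rain is week 4, loosely echoing the article's real-world advisory to delay sowing until after the third week of June. A production system would of course fold in real-time soil-moisture readings and far richer models.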

“I have three acres of land and sowed groundnut based on the sowing recommendations provided. My crops were harvested on October 28, 2018, and the yield was about 1.35 tonne per hectare. Advisories provided for land preparation, sowing, and need-based plant protection proved to be very useful to me,” said Chinnavenkateswarlu, one of the 175 farmers involved with the project, which later roped in around 3,000 farmers from AP and Karnataka.

AI was able to achieve this because it can compute and analyse large amounts of data in a matter of minutes and make intelligent predictions based on that analysis. At its current level, AI is more about assisting humans than annihilating them, though some strange incidents have indeed occurred and scientists have warned of disastrous consequences in the future.

In July 2017, media from around the world reported that two AI bots, developed by a research team at Facebook, had invented their own language, which the programmers engaging with them could not decipher. According to these reports, the programmers panicked and shut down the programme. But later reports clarified that the truth was a little more mundane, at least as per Facebook’s research team.

The bots had indeed developed a shorthand derived from English to complete a task of negotiating with each other to divide a group of objects. But they did not do this entirely on their own.

It was a mistake on the part of the programmers, who forgot to add a constraint that the bots should converse in a language comprehensible to human beings. Facebook claimed it shut down the programme because its objective, two AIs negotiating in human language, was not met.

But the future of AI is highly uncertain and real concerns do remain. In fact, back in 2017, physicist Stephen Hawking declared that AI could be the ‘worst event in the history of our civilization’. He urged the creators of AI to be careful and ‘employ best practices and effective management’.

Elon Musk, the head of companies such as Tesla and SpaceX, is another prominent proponent of the responsible development and use of AI. He has said that AI is more dangerous than nuclear bombs.

The first major concern with AI in the near future is the loss of jobs, and it is a valid one. Many industry experts at the conference believed that a large number of low-level jobs will indeed be lost (one estimate put the probability at 91 per cent), but that new ones will also be generated, the nature of which is still unclear.

The second concern is the possibility of mass surveillance by governments and corporations, which could use data generated by AI platforms to induce behavioural changes in the public and boost profits: in effect, a process of creating perfect consumers.

This concern could be tackled with better regulation and legislation, which are still to be debated and decided upon. The third concern is the ethics of AI, especially historical human biases creeping into its predictions.

Such bias could be reduced by examining seemingly irrelevant parameters in the training data alongside the relevant ones, so that human prejudice can be identified and corrected, but this needs to be actively ensured.

IBM’s AI OpenScale is a platform specifically aimed at reducing human biases by making AI more democratic and its workings more transparent. If we are not careful about these concerns, all the excitement surrounding AI could end up in a dystopian world.
