Science & Technology

Big Brother is watching you; actually, your face

Facial recognition has become a frontline policing tool in India amid fears that it is prone to errors and allows the government to expand surveillance without much oversight 

 
By Akshit Sangomla
Published: Monday 16 November 2020
Digital illustrations: Ritika Bohra

Have you ever wondered how you recognise a face? Intuitively, right? We often do not pay much attention to this special, though not exclusive, ability of humans.

But whether we think about it or not, each of us has a ‘facial vocabulary’ that enables us to recognise at least 5,000 faces, their peculiarities and profiles.

This vocabulary is organised in such a way that we instantly create memory associations and parallels with faces, for instance, with those of our relatives or friends, and rarely fail to recognise a person we have met at a social event, even if we are meeting them only for the second time.

This intuitive knowledge, which deploys millions of permutations through specialised cells and circuitry to instantly tell us who we are talking to, is an amazing biological feat.

Even more awe-inspiring are the technological interventions that are trying to replicate this biological process. And as usually happens with many other technologies, these are being deployed to create and sustain a surveillance system that has never been seen before.

Our facial features, scanned through every possible source, are being converted into a gigantic data pool. Using algorithms, millions of these faces can be compared and assessed to identify or verify a person, whether a culprit, a dreaded terrorist in disguise, a visitor in a protected area or a rioter. At the individual level, the technology has already been deployed to connect us better with our wider virtual world.

Face-recognition technology is becoming commonplace, used in most smartphones for unlocking. Several popular mobile applications, such as Instagram and Snapchat, use the technology to tag individuals and apply filters to photographs.

“While there is a range of facial recognition techniques, prevalent models rely on using an image to create a mathematical representation of a person’s face,” writes Smriti Parsheera in a research paper on facial recognition technology usage in India published by the National Institute of Public Finance and Policy, Delhi, in November 2019.

It is a fast-emerging market. The Allied Market Research group estimates that the global facial recognition market will grow annually at 22 per cent over the next two years to become a $9.6 billion trade.

In recent years, three-dimensional facial recognition devices have captured a significant market as retailers deploy them to gauge customers’ facial gestures and expressions to gain insights into their shopping behaviours.

“By assessing customers’ facial expressions and even bodily responses, retailers are able to gain better insights into consumer behaviour, even to the point where they can predict how and when a buyer might purchase their products in the future. This helps increase sales,” notes Supradip Baul, assistant manager of research at Allied Market Research, in a press statement.

Where to draw the line 

On the face of it, the technology appears to be just another addition to our ever-expanding arsenal of smart systems. But the world is waking up to its perils.

While many question the necessity of this technology, others have raised the alarm that it can be used by governments to invade privacy and intensify mass surveillance.

In India, we recently witnessed this play out in a courtroom. One of the natural rights available to citizens of a democratic country is not to be considered or apprehended as criminals without appropriate legal evidence.

In the investigation of the February 2020 Delhi riots, however, all of us have been placed in the grey area between conscientious citizens and anarchic rioters.

Even if you are sure you were nowhere near the ‘crime’ scene, that does not mean your data was not checked against footage of the rioters collected through CCTV (closed-circuit television) cameras and drones.

Union Home Minister Amit Shah boasted in Parliament about how the Delhi Police tapped into driving licence and voter identity databases to apprehend 1,900 rioters.

If you are not alarmed by the government’s indiscriminate screening of your personal data because you think the truth will prevail, then you should know that the technology that has been trusted with your life and freedom has an accuracy rate of less than 1 per cent.

It cannot even distinguish between boys and girls, claims an affidavit filed by the Union Ministry of Women and Child Development in the Delhi High Court in August 2019, less than a year before the riots.

Earlier, in 2018, the Delhi Police itself had admitted in the high court that the accuracy of its facial recognition system was not more than 2 per cent.

The first level of facial recognition includes the detection of a human face from an image or video. Smartphone cameras use this to autofocus.

The second level involves creating a facial signature of individuals by extracting and cataloguing unique features of their face. These may include the length of the jawline, the spacing between the eyes, dimensions of the nose, mouth and ears.

At the final level, the facial signatures are compared with a database of human images and videos. As the steps increase, so do the complexity and the chances of error.
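
To make these three levels concrete, here is a minimal sketch of the pipeline in Python, using the open-source face_recognition library (a wrapper around dlib). The file names are hypothetical placeholders, and the systems discussed in this article use proprietary, far more complex models; this is an illustration, not their implementation.

```python
# Minimal sketch of the three-level pipeline described above, using the
# open-source face_recognition library. File names are placeholders.
import face_recognition

# Level 1: detect human faces in an image (returns bounding boxes).
image = face_recognition.load_image_file("crowd_photo.jpg")
face_boxes = face_recognition.face_locations(image)

# Level 2: extract a 'facial signature' for each detected face -- here,
# a 128-dimensional embedding encoding features such as the spacing of
# the eyes and the shape of the jawline.
signatures = face_recognition.face_encodings(image, face_boxes)

# Level 3: compare each signature against a known face from a database.
known_image = face_recognition.load_image_file("known_person.jpg")
known_signature = face_recognition.face_encodings(known_image)[0]

for i, sig in enumerate(signatures):
    # Lower distance means a closer match; 0.6 is the library's
    # conventional cutoff. Every level adds its own chance of error.
    dist = face_recognition.face_distance([known_signature], sig)[0]
    print(f"Face {i}: distance {dist:.2f}, match: {dist < 0.6}")
```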

Facial recognition software in India began life benevolently, with the aim of identifying missing children. In those circumstances, an accuracy rate of even 1 per cent is admirable: one more child out of every 100 returned to the safety of their family. But the same statistics seem totalitarian and dystopian when they are used to implicate citizens in crimes.

Despite the challenges, India is betting big on the technology and is on its way to creating one of the world’s largest facial recognition-based surveillance systems.

The National Automated Facial Recognition System, being developed by the National Crime Records Bureau (NCRB), which comes under Shah’s ministry, is meant to automatically identify and verify criminals, missing persons, unidentified bodies and unknown traced persons.

The Union government released a request for proposal document in July last year to provide project information to prospective developers.

The document has since been revised several times due to backlash from civil society groups. Even the deadline for developers to enrol for the bidding was extended over 10 times; it finally closed on October 8, 2020.

The National Automated Facial Recognition System will have a searchable visual database of “missing persons, unidentified found persons, arrested foreigners, unidentified dead bodies and criminals based around dynamic police databases”. It will also have individual information, such as name, age, addresses and special physical characteristics.

The database will be accessible through mobile phones and will be available with the state police, along with the Union home ministry and NCRB. It can be accessed by 2,500 users at the same time.

The system will also provide matching based on images and videos of modified facial features, such as faces altered by plastic surgery, ageing, beards, make-up, expressions, hairstyles, glasses, scar marks and tattoos.

The project document claims that it will “play a very vital role in improving outcomes in the area of criminal identification and verification” through “quick and timely information availability”. But not everyone is smitten by the government’s tall claims.

Legal tangles

The proposed system has no legal backing, claims the Internet Freedom Foundation (IFF), a Delhi-based non-profit, which has recently issued notices to the Union home ministry and NCRB over the legality of the system.

IFF's notice draws strength from the Supreme Court verdict of August 2017. While hearing Justice K S Puttaswamy (Retd) and Anr v Union of India and Ors, the Supreme Court had said that privacy constitutes a fundamental right under Article 21 of the Indian Constitution, which ensures the ‘right to life and personal liberty’.

It added that any interference in an individual’s privacy by the state should be done only in a manner that is “fair, just and reasonable”.

It explained that states can interfere with an individual’s privacy only if: it is supported by law; pursues a legitimate state aim; and is proportional to the objective.

The Information Technology Act, 2000, which classifies biometric data as a type of sensitive personal data, also has rules for the collection, disclosure and sharing of such information.

The checks mentioned in the Act cover only ‘body corporates’ and do not apply to the government’s use of biometric facial data.

The proposed surveillance system is also a disproportionate response as it requires the deployment of facial recognition technology on large segments of the population without their consent.

In a similar precedent, while rejecting the justification of countering black money as the basis for mandatory linkage of Aadhaar with bank accounts, the Supreme Court had noted that imposing such a restriction on the entire population, without any evidence of wrongdoing on their part, would constitute a disproportionate response.

The decision to have no legal framework means that the coercive technology is being implemented by the executive with no accountability, warns Divij Joshi, an independent lawyer and a Mozilla Tech Policy Fellow.

In the Aadhaar card case, the apex court had also noted that although the disclosure of information in the interest of national security cannot be faulted, the power to make such decisions should preferably be vested in a judicial officer and not concentrated with the executive.

The apex court’s insistence makes sense because the chances of misuse are higher with the government: several of its departments, such as transport, already hold high-resolution images of most citizens. These can serve as a ready-made database for facial recognition programmes and can easily be combined with public surveillance and other cameras to construct a comprehensive system of identification and tracking.

“We have already seen the fallout of that in December 2019, when the Delhi Police decided to use the technology against people who were peacefully protesting against the National Register of Citizens,” Joshi says.

The right to protest, to publicly question and force the government to answer, is a fundamental right that flows directly from Article 19 of the Constitution, which ensures the right to freedom of speech and expression.

The other challenge is the tendency of government bodies to widen the application of such technologies to newer areas.

The Delhi Police, for instance, set up a surveillance unit in 2017 using facial recognition technology, with the mandate of searching for missing children and identifying bodies.

This happened on the orders of the Delhi High Court in Sadhan Haldar v The State NCT of Delhi. A little later, the police on their own widened the scope and started screening people visiting the Red Fort on Independence Day to listen to the Prime Minister's speech.

Now it uses the technology for all kinds of surveillance. This shift from locating missing children to identifying rioters happened without any legal sanction or due planning and procedure.

Similar trends can also be seen in Telangana, where the police recently used their surveillance system to track people suspected of having the novel coronavirus disease (COVID-19).

Currently, five other police forces use facial recognition technology in some form or another: Punjab, Uttar Pradesh, Uttarakhand, Maharashtra and Tamil Nadu.

The distrust among civil society groups also stems from the fact that the government is trying to set up the surveillance system without prior discussion or consultation about its implications.

“Little is known about the criteria used for the selection of the technology partner, the security protocols that are in place and the accuracy of the results. Transparency about the use of FRTs (facial recognition technologies) becomes all the more important when it is used in the context of criminal investigations,” writes Parsheera.

Even the sketchy information available in the request for proposal document released by NCRB to attract bidders shows how coercive the system could be.

In the first draft of the document, released in July 2019, NCRB claimed that the database will automatically collect information from CCTV cameras installed across the country.

While the government dropped this clause after backlash from civil society groups, it has found a way around it. The latest document, released on June 22 this year, lists scene-of-crime images and videos as one of the data sources. “This is in all likelihood a proxy for CCTVs,” says Anushka Jain of IFF.

The system will also integrate existing databases available with various state police departments. These state-level databases rely heavily on CCTVs, so the national surveillance system will indirectly have access to them.

Delhi is currently in the process of installing about 300,000 CCTV cameras, in addition to the 250,000 cameras already operated by the Delhi Police. One of the stated objectives of this expansion is to ensure the safety of women.

“This illustrates a problematic link between the use of surveillance technologies as a tool for delivering gender justice. In reality, such technologies often end up having a disproportionate negative impact on women and other marginalised groups,” writes Parsheera in her paper.

The logic of this argument runs parallel to the Delhi Police’s initial claim that its surveillance system will be specifically used for identifying missing children and bodies.

Studies show that even highly accurate facial recognition algorithms struggle with children, as their facial signatures keep changing. They also have limited success with bodies, which are often mutilated or decomposed.

Once law enforcers have access to the national surveillance system, they will be able to conduct three kinds of searches: 1:1, 1:N and N:N. In a 1:1 search, users can check the credentials of a specific individual against his or her information in the database.

In a 1:N search, an individual is searched against the entire database for a match. The N:N search is contentious as it allows the identification of multiple people at a protest site or similar gathering, and can be used to single out certain kinds of people based on their community or culture.
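
A minimal sketch of how 1:1 verification differs from 1:N identification, in plain Python with NumPy; the threshold, function names and data here are illustrative assumptions, not details of the NCRB system, whose actual design is not public.

```python
# Illustrative sketch of 1:1 verification versus 1:N identification on
# face embeddings. All data and thresholds here are made up.
from typing import Dict, Optional
import numpy as np

THRESHOLD = 0.6  # assumed distance cutoff for declaring a match

def verify_1_to_1(probe: np.ndarray, claimed: np.ndarray) -> bool:
    """1:1 search: does this face match one specific claimed record?"""
    return float(np.linalg.norm(probe - claimed)) < THRESHOLD

def identify_1_to_n(probe: np.ndarray,
                    database: Dict[str, np.ndarray]) -> Optional[str]:
    """1:N search: who, in the entire database, is this face?"""
    best_name, best_dist = None, THRESHOLD
    for name, embedding in database.items():
        d = float(np.linalg.norm(probe - embedding))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# An N:N search amounts to running identify_1_to_n for every face
# detected in a scene, say drone footage of a protest, which is why
# it is the most contentious of the three modes.
```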

The chances of misuse increase manifold as all law enforcers will have access to this search type.

“NCRB database can not only exacerbate biases already present in the policing force but become a source of new ‘automated’ discrimination. This is both because it can provide a basis for police decisions which can realistically affirm personal biases, and because the software itself may systematically discriminate based on facial features,” warns Joshi.

He adds that the country has had a long history of racist and casteist police databases. “There are no specific studies of this in India. It also means that there are no systems for auditing these technologies for such biases.”

Another allied challenge is the ‘other-race effect’: humans easily identify people from their own region or race but struggle with those from other races. This bias has also been found to afflict facial recognition algorithms.

All facial recognition surveillance systems require three kinds of datasets: a training dataset to prepare the system, a dataset of suspects, usually called ‘persons of interest’, and finally the master image database.

The fear is that in a country as diverse as India, in terms of race and ethnicity, if the ‘other race effect’ creeps into the system, correcting it will become an uphill challenge.
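
Since, as Joshi notes, there are no systems in India for auditing these technologies for bias, here is a minimal sketch of one thing such an audit could measure: the false match rate broken down by demographic group, in the spirit of the Buolamwini-Gebru study cited later in this article. The records and group labels are hypothetical.

```python
# Minimal sketch of a per-group bias audit: compute the false match rate
# (impostor pairs wrongly declared a match) separately for each group.
# All records below are hypothetical.
from collections import defaultdict

# Each record: (group_label, system_declared_match, actually_same_person)
trials = [
    ("group_a", True,  False),   # false match
    ("group_a", False, False),
    ("group_b", True,  True),    # true match
    ("group_b", True,  False),   # false match
    # ... a real audit would use thousands of labelled trials per group
]

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)

for group, declared_match, same_person in trials:
    if not same_person:               # impostor pair: should NOT match
        impostor_trials[group] += 1
        if declared_match:            # but the system said it did
            false_matches[group] += 1

for group in sorted(impostor_trials):
    fmr = false_matches[group] / impostor_trials[group]
    print(f"{group}: false match rate {fmr:.0%}")
```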

Global obsession

Without legal safeguards, facial recognition technology is set to undermine democratic values. In January this year, the US recorded its first known case of misidentification by facial recognition technology.

Robert Julian-Borchak Williams was arrested by the Detroit Police Department after he was misidentified as the man who robbed a watch shop in 2018. Williams was kept in custody for over 30 hours and then released with a simple apology.

This is the biggest fear, as most countries, including India and the US, lack a legal framework that can bring accountability into the system.

US researchers Joy Buolamwini and Timnit Gebru found in 2018 that three of the major commercial facial recognition technologies misidentified darker-skinned females with a maximum error rate of 34.7 per cent as compared to the maximum error rate of 0.8 per cent for lighter-skinned males.

The National Institute of Standards and Technology, which maintains technology standards in the US, in December 2019 found a racial and gender bias in most top facial recognition technology systems in the country.

Almost 85 per cent of countries with facial recognition systems employ them for surveillance, suggests the Artificial Intelligence Global Surveillance Index 2019.

The index, released by the Carnegie Endowment for International Peace, found that facial recognition systems were in place in 75 countries.

The idea of facial recognition gained traction in the 1960s, when Woody Bledsoe, a mathematician and co-founder of Panoramic Research in Palo Alto, found a way to manually input a person’s facial features into a computer, which would then search for matches based on the distances between eyes, mouths, tips of noses and hairlines.

The accuracy improved in the 1970s as researchers drew on more facial markers, such as lip thickness. But the real progress came in the 1980s and 1990s with new methods to locate a face in an image and extract its features, making fully automated facial recognition possible.

The most radical advances came from 2010 onwards, when deep neural networks transformed face recognition. In 2011, the technology helped confirm the identity of Osama bin Laden after he was killed in a US raid.

Facebook rolled out the technology for photo tagging, and in 2014 its DeepFace program became the first to reach near-human performance in face recognition.

With the improvement in technology came the obvious exploitation. China, which currently has the largest facial recognition system in place, used it to identify and target pro-democracy protestors.

It has also set up a system to track minority communities like the Uighur Muslims, who mostly live in Xinjiang province. One particular system alerts the authorities if individuals move beyond a designated 300-metre circle.

The most recent example comes from the US, where the authorities used the technology to hunt down protestors during the Black Lives Matter protests, triggered by the custodial death of George Floyd in Minnesota. This abuse has made several Silicon Valley companies backtrack.

IBM, for instance, has closed its facial recognition technology division. Amazon has put a moratorium on the technology for a year. Microsoft has announced it will not sell its facial recognition technology to the police in places without federal regulation.

But corporate restraint will not prevent the abuse of this technology; even if some firms stand down, others are eager to step in. Japan-based NEC Corporation and US-based Clearview AI, two regular vendors for police departments across the world, continue to do business as usual.

Corporations are also expanding the scope of facial recognition to study and predict human behaviour. Israeli company Faception has made software that it claims can read an individual’s face and predict his or her intelligence quotient, and even the inclination to commit a crime or terror attack. Small wonder governments are already acquiring the technology.

In fact, China is experimenting with a social credit system in which citizens who toe the government’s line will be rewarded for their ‘good behaviour’ while detractors will be penalised for their ‘bad behaviour’. And there are no prizes for guessing that the nationwide screening will be done with the help of facial recognition technology.

This was first published in the 16-31 October, 2020 edition of Down To Earth

