
AI can see you

A sophisticated and frightening face-recognition algorithm being employed by technology firms could be the end of privacy as we know it

 
By Rohini Krishnamurthy
Published: Thursday 07 March 2024

When Kashmir Hill, a technology journalist at The New York Times, tried on a pair of chunky augmented reality glasses in 2021, she caught a glimpse of what a sophisticated technology could unleash on a world that is not prepared to handle it. The glasses were no ordinary gadget; they revealed to the wearer the identity of any individual seen in the frame. They also displayed photos of the person uploaded on the internet, along with information on where those photos were taken.

Powering these augmented reality glasses is an artificial intelligence algorithm designed by Clearview AI, a New York City-based facial recognition startup founded in 2017. The company would go on to sign a contract with the US Air Force to research augmented reality glasses that could help with security on military bases. Hoan Ton-That, an Australian national who is the CEO and co-founder of the company, tells Hill that the idea behind the gadget was to help soldiers decide whether someone standing 15 m away was dangerous or not.

But concerns over the technology’s possible misuse are largely ignored by the entrepreneur, who is keen on convincing people that they should make their faces available for the world to see. This is something that Hill effectively brings out in her book Your Face Belongs to Us.

The book is largely a documentation of Hill’s investigation of Clearview AI, except for two chapters that are dedicated to Facebook and Google’s entry into the facial recognition space, and their exit from it, fearing repercussions. Hill also makes an interesting point about how Clearview AI managed to beat Google and Facebook in the game despite these companies possessing vast databases of people’s photos and names. These giants, she says, were wary of the consequences while Clearview AI, being a new entrant, did not have much to lose.

Hill’s first encounter with Clearview AI was in 2019, when the startup was making a splash with law enforcement agencies in the US. A tip about a mysterious firm capable of identifying a person with just a snapshot of their face caught her attention. After relentless sleuthing, multiple dead ends and several interviews with people involved in the business, Hill managed to lift the veil on the startup, which would otherwise have preferred to operate without media scrutiny.

Illustration: Yogendra Anand

The author gives her readers a peek into the tech entrepreneur’s mind by tracing Ton-That’s past, from his growing-up years in Melbourne, Australia, where he taught himself to code at the age of 14, to his move to San Francisco, US, where he dabbled in building apps. He finally tasted success with Clearview AI, which he co-founded with Richard Schwartz, an aide to Rudolph W Giuliani when Giuliani was mayor of New York. Schwartz helped Ton-That secure investments.

In a blog published on Clearview AI’s website, Ton-That passes off facial recognition technology as something innocuous by calling it “Google for Faces”. He goes on to say that his algorithm uses publicly available photos, instead of keywords, to provide results. To identify a person, the technology compares an uploaded mugshot with a database of 20 billion photos that were scraped from sites like Facebook, without the consent of users. The original plan was to sell the software to private companies, hotels, luxury condos and retail stores to keep out thieves or strangers. In 2020, the company changed gears, finding its calling in fighting crime, from petty violence and property disputes to human trafficking and child abuse. The company then decided to sign contracts exclusively with governments and law enforcement agencies.
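For readers curious about the mechanics, face search of this kind is usually described as comparing a numerical “embedding” of the uploaded photo against embeddings of every photo in the database and returning the closest matches. The short Python sketch below illustrates that general idea; the random vectors, dimensions and scoring are stand-ins chosen purely for illustration, not Clearview AI’s actual model or code.

```python
# Illustrative sketch of embedding-based face search (NOT Clearview AI's system).
# In a real system, a trained neural network converts each photo into an
# embedding vector; here random vectors stand in for those embeddings.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "database": one 512-dimensional embedding per scraped photo.
database = rng.normal(size=(10_000, 512))
database /= np.linalg.norm(database, axis=1, keepdims=True)  # unit-normalise rows


def search(probe: np.ndarray, top_k: int = 5) -> list[tuple[int, float]]:
    """Return indices and cosine similarities of the closest database faces."""
    probe = probe / np.linalg.norm(probe)
    scores = database @ probe                 # cosine similarity on unit vectors
    best = np.argsort(scores)[::-1][:top_k]   # highest-scoring photos first
    return [(int(i), float(scores[i])) for i in best]


# A probe embedding (in practice, produced from the uploaded photo).
probe = rng.normal(size=512)
for idx, score in search(probe):
    print(f"photo #{idx}: similarity {score:.3f}")
```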

Though Clearview AI’s algorithm was rated by the National Institute of Standards and Technology, an agency of the US Department of Commerce, as the world’s second-most accurate at recognising faces, it is far from perfect. For instance, Hill explains that a 28-year-old African American man in Georgia was wrongfully arrested after Clearview AI’s algorithm mistook him for another person who used stolen credit cards at consignment stores in Louisiana. This was hardly the first case of an innocent person paying the price for humans making decisions based on an imperfect technology. Facial recognition systems are notorious for making errors when identifying African Americans and women.

Hill also highlights the case of a technology enthusiast who used facial recognition software to trace the real names and addresses of adult film actors. Nothing could have stopped the individual from stalking, hurting or assaulting the women he found on pornographic websites. In the wrong hands, the technology can lead to more crime.

Ton-That explains that his algorithm was used to help identify the people who attacked the US Capitol on January 6, 2021, to disrupt the affirmation of the presidential election results. Government agencies in Ukraine are also said to have used the technology during the Russian invasion. But it is quite evident that the technology is imperfect, and even dangerous in the absence of regulation. Hill’s book also raises concerns over whether firms like Facebook will jump back on the bandwagon in the future. She warns that the social networking giant has not ruled out using facial recognition algorithms in augmented reality glasses.

The author leaves her readers with a question on what it means to maintain privacy in the modern world: “Information that you give up freely now, in ways that seem harmless, might come back to haunt you when computers get better at mining it.” This is something that users, companies and governments need to ponder over right now.

This was first published in the 1-15 March, 2024 print edition of Down To Earth
