Facial recognition data points: 'While facial recognition algorithms may be neutral themselves, the databases they are tied to are anything but.'

Facial recognition: is the technology taking away your identity?

Facial recognition technology is being used by companies such as Tesco, Google and Facebook, and it has huge potential for security. Concerned? It may be too late to opt out…

This summer, Facebook will present a paper at a computer vision conference revealing how it has created a tool almost as accurate as the human brain when it comes to saying whether two photographs show the same person – regardless of changes in lighting and camera angles. A human being will get the answer correct 97.53% of the time; Facebook's new technology scores an impressive 97.25%. "We closely approach human performance," says Yaniv Taigman, a member of its AI team.
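Under the hood, systems of this kind typically reduce each photo to a numerical "embedding" and declare two faces the same person when the embeddings are close enough. The sketch below illustrates that general idea only; the random vectors and the 0.7 threshold are stand-ins for illustration, not Facebook's actual model or numbers.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.7) -> bool:
    """Verification: declare a match if the embeddings are close enough.

    Real embeddings would come from a trained neural network; here they
    are stand-ins, and the 0.7 threshold is an illustrative assumption.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy usage with random vectors in place of real embeddings.
rng = np.random.default_rng(0)
photo_a = rng.normal(size=128)
photo_b = photo_a + rng.normal(scale=0.1, size=128)  # same face, new lighting
print(same_person(photo_a, photo_b))                 # True: small perturbation
```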

Since the ability to recognise faces has long been a benchmark for artificial intelligence, developments such as Facebook's "DeepFace" technology (yes, that's what it called it) raise big questions about the power of today's facial recognition tools and what these mean for the future.

Facebook is not the only tech company interested in facial recognition. A patent published by Apple in March shows how the Cupertino company has investigated the possibility of using facial recognition as a security measure for unlocking its devices – identifying yourself to your iPhone could one day be as easy as snapping a quick selfie.

Google has also invested heavily in the field. Much of its interest in facial recognition revolves around the possibilities offered by image search, with the search leviathan hoping to find more intelligent ways to sort through the billions of photos that exist online. Since Google, like Facebook, wants to understand its users, it makes perfect sense that piecing together your life history through public images would be of interest, although users who uploaded images without realising they could be mined in this way might be less impressed to end up with social media profiles they never asked for.

Google's deepest dive into facial recognition is its Google Glass headsets. Thanks to the camera built into each device, the headsets would seem tailor-made for recognising the people around you. That's exactly what third-party developers thought as well: almost as soon as the technology was announced, apps such as NameTag began springing up. NameTag's idea was simple: whenever you start a conversation with a stranger, your Google Glass headset takes a photo of them and uses it to look up the person's online profile. Whether they share your interest in Werner Herzog films, or happen to be a convicted sex offender, nothing will escape your gaze. "With NameTag, your photo shares you," the app's site reads. "Don't be a stranger."
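Verification, as above, asks "are these two photos the same person?"; what NameTag describes is identification, comparing one captured face against a whole gallery of profile photos and returning the best match. A minimal sketch of that one-to-many lookup, reusing the embedding idea with a hypothetical gallery:

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.7) -> str | None:
    """Return the best-matching profile name, or None if nothing clears
    the threshold. Gallery values are precomputed face embeddings."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_name, best_score = None, threshold
    for name, emb in gallery.items():
        score = cos(query, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical gallery of profile photos, already embedded.
rng = np.random.default_rng(1)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
snapshot = gallery["alice"] + rng.normal(scale=0.1, size=128)
print(identify(snapshot, gallery))  # 'alice'
```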

While tools such as NameTag appeared to be the kind of "killer app" that might make Google Glass a success, in the end Google agreed not to distribute facial recognition apps on the platform, although some have suggested that this is no more than a "symbolic" ban that will erode over time. That is to say, Google may prevent users from installing facial recognition apps on Glass itself, but it could well be possible to upload images to sites, such as Facebook, that feature facial recognition. Moreover, there is nothing to prevent a rival headset from allowing facial recognition apps – and would Google be able to stop itself from following suit?

Not everyone is happy about this. US senator Al Franken has spoken out against apps that use facial recognition to identify strangers, going so far as to publish an open letter to NameTag's creators. "Unlike other biometric identifiers such as iris scans and fingerprints, facial recognition is designed to operate at a distance, without the knowledge or consent of the person being identified," he wrote. "Individuals cannot reasonably prevent themselves from being identified by cameras that could be anywhere – on a lamp post, attached to an unmanned aerial vehicle or, now, integrated into the eyewear of a stranger."

To proponents of facial recognition, of course, this is precisely the point. Like the club doorman who knows you by name and can spot you in a busy crowd, facial recognition can make everything that bit more personal. In Steven Spielberg's 2002 sci-fi film Minority Report, ads are made more personal by using facial recognition technology: as Tom Cruise's character walks down the street, he is bombarded with customised adverts for everything from new cars to alcoholic drinks. In 2014, a number of companies are already bringing these ideas to (digital) life. Late last year, Tesco announced plans to install video screens at its checkouts around the country, using inbuilt cameras equipped with facial recognition algorithms to ascertain the age and gender of individual shoppers.

Personal targeted advertising in Spielberg's Minority Report, starring Tom Cruise.

A Californian startup called Emotient, meanwhile, focuses on facial expression analysis. Incorporated into next-generation TVs by way of a webcam, its technology could potentially be used to monitor viewer engagement with whatever entertainment is placed in front of them. Answers to questions such as "how many times did your face register interest during a programme?" can then be fed back to television companies to inform creative decisions about programming.
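Whatever Emotient's proprietary classifier does per frame, the aggregation step it describes is straightforward: label each sampled frame with an expression, then report how often each state occurred. A minimal sketch, assuming such per-frame labels already exist:

```python
from collections import Counter

def engagement_report(frame_labels: list[str]) -> dict[str, float]:
    """Summarise per-frame expression labels (e.g. from a webcam sampled
    once a second) into the share of the programme spent in each state."""
    counts = Counter(frame_labels)
    total = len(frame_labels) or 1
    return {label: n / total for label, n in counts.items()}

# Hypothetical output of an expression classifier over a 10-frame clip.
labels = ["neutral", "interest", "interest", "neutral", "joy",
          "interest", "neutral", "neutral", "interest", "neutral"]
print(engagement_report(labels))  # {'neutral': 0.5, 'interest': 0.4, 'joy': 0.1}
```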

"It is time for a step-change in advertising," says Lord Sugar's son, Simon, chief executive of Amscreen, which developed the OptimEyes technology behind Tesco's facial recognition screens. "Brands deserve to know not just an estimation of how many eyeballs are viewing their adverts, but who they are, too. Through our Face Detection technology, we want to optimise our advertisers' campaigns, reduce wastage and in turn deliver the type of insight that only online has previously been able to achieve."

Putting aside the question of whether or not brands do "deserve" to know anything and everything about their customers, companies such as Amscreen and Emotient offer far from the creepiest applications of facial recognition. In the US, the startup SceneTap (previously known as BarTabbers) has installed cameras in more than 400 bars, using facial recognition to help bar-hoppers decide which locations to visit on a night out. SceneTap offers real-time information on everything from gender ratios to the average age of patrons. A patent filed by the company even suggests plans to link identified people with their social networking profiles to determine "relationship status, intelligence, education and income".

Although the use of facial recognition tools is still relatively new in the consumer sector, that is where much of the visible innovation will take place over the coming years. "The stakes are lower, so companies are free to take more risks," says Kelly Gates, professor in communication and science studies at UC San Diego and author of Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance. "As a result, there are a lot of experiments in the commercial domain. So what if you identify the wrong person by accident when you're targeting an ad? It's not that big a deal. It happens all the time in other forms of advertising."

Mohamed Atta (right) in the airport surveillance tape from Portland, Maine, 11 September 2001. Photograph: Reuters

There are, naturally, problems, and most relate to privacy concerns. Although privacy is an issue with every form of data mining, at least online the majority of information absorbed by companies is anonymised. Facial recognition, of course, is precisely the opposite. And since facial recognition takes place in public spaces, it is not even necessary for the person being surveilled to actively "opt in" to the service.

This, in turn, links to the subject of security, which for many companies and organisations is the ultimate application for facial recognition. Hitherto, most facial recognition research has been funded by governments interested in its potential for streamlining surveillance. That emphasis has only increased over the past decade, provoked by events such as the 9/11 attacks and the 7/7 London bombings in 2005.

One of the most poignant images to come out of 11 September was a grainy frame of surveillance footage showing hijacker Mohamed Atta as he passed through an airport metal detector in Portland, Maine. Unlike the horrifying images of the collapse of the Twin Towers, this quieter picture was dramatic because of what it implied: that if only the right technology had been available, that day's tragedy could have been averted.

The idea that data mining algorithms have any place in helping us stop the next 9/11 or 7/7 has been criticised in some quarters. But there is no doubt that facial recognition plays an ever more important part in control and surveillance – both in England and overseas. On 5 April 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles informing him that he should stop driving, effective immediately. A conscientious driver who had not received so much as a traffic violation in years, Gass was baffled. After several frantic phone calls, followed up by a hearing with registry officials, he learned that his image had been flagged by a facial recognition algorithm designed to scan through a database of millions of drivers' licences looking for potential criminal false identities. The algorithm had determined that he looked sufficiently like another Massachusetts driver that foul play was likely involved, so he received the automated letter. The RMV was unsympathetic, claiming it was the accused individual's "burden" to clear their name in the event of any mistakes, and arguing that the benefits of protecting the public outweighed the inconvenience to the wrongly targeted few.
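Gass's case is a textbook base-rate problem: run even a very accurate matcher against millions of licence photos and a steady stream of innocent lookalikes will be flagged. A back-of-envelope sketch, with both figures assumed purely for illustration:

```python
def expected_false_flags(database_size: int, false_match_rate: float) -> float:
    """Expected number of innocent drivers wrongly flagged when every
    licence photo is screened against the database."""
    return database_size * false_match_rate

# Assumed figures for illustration only: ~4.5m licences on file and a
# 1-in-10,000 false match rate per person.
print(expected_false_flags(4_500_000, 1e-4))  # 450.0 wrongly flagged drivers
```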

"The dream is for governments to be able to set up networked cameras in public locations, capable of constantly searching through the faces of people who are photographed," says Xiaoou Tang, professor in the department of information engineering at the Chinese University of Hong Kong and one of the world's leading experts in facial recognition. "Once this is done, the images can then be matched to a database looking for suspects or potential terrorists, so that [pre-emptive] arrests can be made."

Perhaps the most notable thing about our faith in facial recognition is what it says regarding belief in the inherent neutrality (or even objectivity) of such systems. "One of the things that troubles me is the idea that machines don't have bias," says Gates. Of course, in a real sense, they might not. Unless a programmer is personally prejudiced and decides deliberately to code that bias into whatever system he or she is working on, it is unlikely that a facial recognition algorithm will exhibit prejudice against certain groups for the reasons that a human might.

But that doesn't mean prejudice can't occur. It could be, for example, that facial recognition tools show a higher rate of recognition for men than for women, and for individuals of non-white origin than for whites. (This has been shown to be true in the past.) A facial recognition system might not target a black male out of overt prejudice in the way that a racist person might, but if it is more likely to flag him than it is to flag a white female, the biased end result is no different.
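Skew of this kind is measurable, and hiding it is easy: a single overall accuracy figure simply averages it away. The sketch below, over hypothetical evaluation records, shows why error rates need to be computed per demographic group:

```python
from collections import defaultdict

def per_group_false_match_rate(results: list[tuple[str, bool]]) -> dict[str, float]:
    """results: (group, wrongly_matched) pairs from an evaluation run.
    Returns the false-match rate for each group, which a single overall
    accuracy figure would conceal."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, wrongly_matched in results:
        totals[group] += 1
        errors[group] += wrongly_matched
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation: identical-looking aggregate data can conceal
# very different error rates between groups.
results = [("group_a", True)] * 8 + [("group_a", False)] * 92 \
        + [("group_b", True)] * 2 + [("group_b", False)] * 98
print(per_group_false_match_rate(results))  # {'group_a': 0.08, 'group_b': 0.02}
```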

And while facial recognition algorithms may be neutral themselves, the databases they are tied to are anything but. Whether a database concerns criminal suspects or first-class travellers, it is still designed to sort us into categorisable groups.

"These databases are what define our social mobility and our ability to move through the world," says Gates. "Individual identification is always tied to social classification. It's always there for some specific purpose, and that's usually to determine someone's level of access or privilege. The ethical questions in facial recognition relate to those social hierarchies and how they're established."

"I think it worries people because there's something very permanent about it," says Xiaoou Tang. "Even when you're talking about using your face or your fingerprints to unlock a phone, this is a password we can never change. We only have one, and once it's set up it's going to be your password for life."

This isn't to suggest that facial recognition doesn't have its positives. As computer vision continues to improve over the coming months and years, we'll reap the benefits as computer users. But the idea that we can take the giant, anonymous world we live in and transform it into a place as knowable as a small town is, at root, a utopian, even naive, one. "Ultimately we need to ask ourselves whether a world of ubiquitous automated identification is really one that we want to build," says Gates.

It's important to understand the scale of change that is under way, because it is going to dictate what happens. Knowing about facial recognition, and how it is used by both governments and companies, is key to helping us face the future. No pun intended.

Luke Dormehl is the author of The Formula: How Algorithms Solve All Our Problems (And Create More), published by WH Allen, £20
