Facial Recognition: More than Just A Pretty Face


FINGERPRINTS ARE INIMITABLE identity markers. In forensic science, the ridges on the human finger leave impressions that are highly distinctive, unalterable and durable, making them an ideal basis for human identification and verification.

With the advent of high-definition digital cameras and smartphones, facial recognition has become a viable alternative for identification, offering the advantage of detection at a distance, even when the subject is on the move. Granted, not everyone has clear fingerprints, but we all have facial features that are distinguishable unless they have been altered by extensive cosmetic surgery.

Both rely on biometrics, which involves measuring and calculating distinctive bodily and behavioural characteristics to identify, label or describe individuals. Retina and facial recognition boast security benefits, but they also raise privacy concerns.

Secret History

Sixty years ago, Woody Bledsoe1, the son of a sharecropper, along with Helen Chan and Charles Bisson, worked on giving computers a perilously powerful human capacity – the ability to recognise faces and identify individual people on a mass, automated scale. This feat did not receive much publicity, as the project was funded by some of the most secretive US intelligence agencies of the time.

Bledsoe’s early research techniques were based on marking facial landmarks such as the eyes, nose, mouth, forehead and chin. The distances between these nodal points were then measured, calculated and compensated for pose variation, forming the geometry of the face known as the “face signature”. This face map was then compared with images in a database of mugshots, photos and the like to determine the subject’s identity. The early challenges stemmed from great variability in head rotation and tilt, light intensity and angle, facial expression and ageing.
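
For readers curious about the mechanics, the idea can be sketched in a few lines of Python. This is a toy illustration only: the landmark names, coordinates and eye-spacing normalisation below are assumptions for the sake of example, not Bledsoe’s actual scheme.

```python
# Toy sketch of a landmark-distance "face signature".
# Landmark names and the normalisation choice are illustrative assumptions.
from itertools import combinations
from math import dist

def face_signature(landmarks: dict[str, tuple[float, float]]) -> list[float]:
    """Pairwise distances between landmarks, scaled by eye spacing as a crude
    compensation for image size and distance from the camera."""
    scale = dist(landmarks["left_eye"], landmarks["right_eye"])
    return [dist(landmarks[a], landmarks[b]) / scale
            for a, b in combinations(sorted(landmarks), 2)]

def identify(signature: list[float], gallery: dict[str, list[float]]) -> str:
    """Return the gallery identity whose stored signature is closest."""
    return min(gallery, key=lambda name: dist(signature, gallery[name]))
```

A database of mugshots would simply hold one such signature per enrolled person, and the probe image’s signature would be matched against them all.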

By 1997, funding from the US Army Research Laboratory to the University of Bochum in Germany and the University of Southern California had produced the Bochum system, which was sophisticated enough to overcome identification impediments such as beards, glasses and changed hairstyles. The system was sold commercially to Deutsche Bank and to airport operators, and deployed at busy sites requiring enhanced security.

Advances and Adoption

Over the last two decades, facial recognition has become mainstream, largely due to the convergence of technologies such as digital cameras, faster computing power and image database storage, and, more importantly, advances in machine-learning algorithms.

Have you ever wondered how your iPhone and Facebook know that it’s you? In facial recognition, the trick lies in matching the face signature against as large and as varied a dataset as possible to avoid false positives. If the database contains images of you at various angles and in different lighting, expressions and clothing, the chances are much better that the machine will identify your face accurately. Each time you unlock your phone with a selfie, the phone learns to recognise you in another environmental setting, becoming more precise over time.
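
As a rough sketch of how such matching might work under the hood – the vector “signatures”, cosine scoring and 0.6 rejection threshold below are assumptions for illustration, not any vendor’s actual pipeline:

```python
# Illustrative gallery matching with a rejection threshold to limit false positives.
import numpy as np

def identify(probe: np.ndarray,
             gallery: dict[str, list[np.ndarray]],
             threshold: float = 0.6) -> str | None:
    """Compare a probe signature against every enrolled sample of every person;
    return the best match only if it clears the threshold, otherwise None."""
    best_name, best_score = None, -1.0
    for name, samples in gallery.items():
        # Several samples per person (different angles, lighting, expressions)
        # make a confident match more likely.
        for sample in samples:
            score = float(probe @ sample /
                          (np.linalg.norm(probe) * np.linalg.norm(sample)))
            if score > best_score:
                best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

The more varied the enrolled samples, the more likely a genuine match clears the threshold, which is why every selfie unlock makes the system a little more reliable.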

Why is Facebook always asking you to tag yourself in photos? By doing so, Facebook is training its algorithm to learn how you identify and classify photos of yourself, thereby increasing its accuracy rate. Snapchat’s “lenses” are selfie filters that superimpose silly and funny caricatures onto faces using advanced computer vision and facial recognition. Today, facial recognition systems can even differentiate between twins, but they still struggle to identify faces that are only partially visible, shrouded in shadow or covered by clothing.

Yuval Noah Harari concludes that in the future, “if Kindle is upgraded with face recognition and biometric sensors, it can know what made you laugh, what made you sad and what made you angry. Soon, books will read you while you are reading them.”2

Facial recognition technology would be near perfect were it not for deepfakes. A deepfake is AI-altered image or video content, tampered with so subtly that it is presented with almost imperceptible realism. Deepfakes can render facial recognition vulnerable by modifying images to fool the system into generating more false positives in identification.

Concerns and Controversy

As humans, we identify people of our own race better than others; and just like humans, machines are accustomed to familiarity. Data are the basis for matching in facial recognition systems; their accuracy therefore depends on the variety and quality of the data they hold. MIT researchers3 have found that “Amazon’s Rekognition system mistakenly identified pictures of women as men 19% of the time, and of darker-skinned women as men 31% of the time. By comparison, Microsoft’s offering misclassified darker-skinned women as men 1.5% of the time.”
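
Audits like the one cited work by comparing a system’s predictions against ground truth separately for each demographic group. A minimal sketch in Python, using invented data rather than the actual benchmark:

```python
# Per-group error-rate audit, in the spirit of the study quoted above.
# The records below are invented; a real audit uses a labelled benchmark dataset.
from collections import defaultdict

def error_rates(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns the misclassification rate for each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        errors[group] += int(truth != predicted)
    return {group: errors[group] / totals[group] for group in totals}

print(error_rates([
    ("darker-skinned women", "woman", "man"),
    ("darker-skinned women", "woman", "woman"),
    ("lighter-skinned women", "woman", "woman"),
]))  # -> {'darker-skinned women': 0.5, 'lighter-skinned women': 0.0}
```

When the error rate differs sharply between groups, as in the Rekognition figures above, the disparity usually traces back to how varied the training data were.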

This has led to concerns that these systems exacerbate bias and misidentify people, leading to wrongful convictions; that regulations to protect individual privacy are lacking; and that there is too little accuracy and accountability in the supply and use of the technology.

Under “Creative Commons” online copyright licences, anyone can copy and reuse images for academic and, in many cases, commercial purposes. Clearview AI, the company founded by the Australian Hoan Ton-That, has combed the web and social media to compile a staggering three billion images, with links to their respective sources. This database is now used by over 600 law enforcement agencies and has been able, eerily, to identify activists at a protest, where they live, who they know and what they have done. Such weaponisation of data is frightening.

IBM recently announced that it will end its research in facial recognition technology, while Google favours a temporary ban. Amazon and Microsoft have limited their engagement with law enforcement agencies, and Facebook paid a USD550 million settlement over its use of facial recognition technology. What brought on this flurry of decisions?

In part, it was triggered by the killing of George Floyd by policeman Derek Chauvin, which raised awareness of bias in racial profiling and of the extent to which law enforcement agencies have access to photos of over half the American population – access that could lead to abuses of power.

A Two-edged Sword

World events like 9/11 triggered investments in technology to enhance security and terrorist-threat detection. Likewise, and despite concerns over privacy infringement, Covid-19 is again strengthening the case for facial recognition, with surveillance cameras that track and trace movement in order to mitigate further, more damaging waves of the outbreak.

While US companies have exercised self-restraint, China has, at the expense of civil liberties, leapt ahead in this technological rat race. At the extreme end of surveillance, China uses facial recognition to detect behaviours deemed socially unacceptable – jaywalking, posting fake news, buying too many video games and so on – and tallies them in social credit schemes such as Sesame Credit, with corresponding rewards and punishments to enforce compliance.

Critics of the bans and moratoriums in the US argue that focusing only on the downsides will forfeit the technology’s benefits, including finding missing persons and fighting terrorism. If the bans become permanent, players from other countries such as NEC, Idemia and Thales will still ship the technology to law enforcement agencies, and China will continue pressing forward; domestic suppliers will be the biggest losers in the end. When transistors were introduced in the 1960s, there were unfounded fears of widespread eavesdropping. Drawing on that lesson, proponents argue that banning facial recognition over the perceived harm of deep surveillance would set a precedent that extends to newer technologies.

Technology is a double-edged sword and an unstoppable force. No Luddite can halt its advance; we can only adapt through pragmatic regulatory policies and rational trade-offs, without compromising ethical norms and morality.

1 https://www.wired.com/story/secret-history-facial-recognition/
2 Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow. Harari is Professor of History at the Hebrew University of Jerusalem.
3 “Face recognition researcher fights Amazon over biased AI”, Matt O’Brien, AP News (apnews.com), 4 April 2019.
Tony Yeoh is the CEO of Digital Penang.


