Movies and television shows are full of scenes where individuals are recognized in huge crowds using facial recognition software. Until recently, some of those scenes seemed far-fetched or prohibitively expensive. That is no longer the case.
Sahil Chinoy described a facial recognition demonstration in The New York Times. He and his team built a sub-100-dollar facial recognition system using public New York cameras, Amazon's Rekognition service, and photos collected from public websites. Their experiment was not real-time, but the results are impressive, especially considering the low cost and ease of construction.
National Public Radio (NPR) reported last year on Chinese use of real-time facial recognition and on US law enforcement's potential use of it. The ability to recognize a face with high accuracy in a live feed of a crowd is impressive, but it also holds the potential for abuse. A bad actor could conceivably build a quality system on a shoestring budget. And if you feel safe because you don't think anyone has your face, you probably shouldn't: according to an article in The Atlantic back in 2016, the faces of half of American adults were already in police facial-recognition databases. And the AP reported on June 4th of 2017 that the FBI may have 640 million faces with names.
Anyone unsure of how to construct such a system need only consult Amazon's AWS Guide: "Build Your Own Facial Recognition Service Using Amazon Rekognition". I doubt it will be long before one can ask Alexa or another personal assistant, "Who is at the door?" and receive a highly accurate response. In a restaurant or store with publicly accessible cameras (or with a covert camera worn or installed by the user), one could conceivably query the names of those dining or shopping there. And the restaurant could contact the customer later to remind her to post a good review on social media.
False Positives and False Negatives
When biometrics are used - and this is a biometric technique - the issues of false positives and false negatives must be considered. A false positive is recognizing someone as "John Doe" when he is not; a false negative is missing "Jane Doe" when she is indeed in an image. The former is an especially significant issue when it comes to law enforcement use of facial recognition: imagine being recognized as a sought-for murder suspect. This has led communities to consider banning the use of such systems by law enforcement.
False negatives could also be an issue when looking for terrorists at public events. Of course, relying on humans to search crowds for potential evil-doers has plenty of false negative issues, too.
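In practice, these systems report a similarity score and the operator picks a match threshold, which is what trades false positives against false negatives. A toy illustration, using made-up similarity scores rather than output from any real system:

```python
# Toy illustration of the false positive / false negative tradeoff.
# Each entry is (similarity_score, is_actually_the_same_person);
# the scores below are invented for illustration.
comparisons = [
    (0.97, True), (0.91, True), (0.88, True), (0.72, True),     # genuine matches
    (0.93, False), (0.80, False), (0.55, False), (0.40, False), # different people
]


def error_rates(threshold: float):
    """Count false positives and false negatives at a given threshold."""
    false_pos = sum(1 for score, same in comparisons
                    if score >= threshold and not same)
    false_neg = sum(1 for score, same in comparisons
                    if score < threshold and same)
    return false_pos, false_neg


# Raising the threshold cuts false positives but misses real matches.
print(error_rates(0.95))  # strict: (0, 3) - no false alarms, 3 misses
print(error_rates(0.50))  # permissive: (3, 0) - 3 false alarms, no misses
```

There is no threshold that zeroes out both error types at once; law enforcement deployments that favor "not missing a suspect" are implicitly accepting more false accusations.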
Research into the accuracy of facial recognition shows that it can reach 99% for "white males" but may fall to around 65% for women of color. This is potentially due to the datasets used to train the AI behind the recognition tools: if the training data consists predominantly of white males, recognition will be more accurate for that group.
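This kind of disparity is easy to miss if you only look at overall accuracy. A simple audit, sketched below with made-up results and group labels of my own choosing, scores each demographic group separately:

```python
# Toy bias audit: overall accuracy can hide per-group disparities,
# so compute accuracy separately for each group.
# The results below are invented for illustration.
from collections import defaultdict

# (group, was_recognized_correctly) for hypothetical test photos
results = [
    ("white male", True), ("white male", True),
    ("white male", True), ("white male", True),
    ("woman of color", True), ("woman of color", False),
    ("woman of color", False),
]


def accuracy_by_group(results):
    """Map each group to its recognition accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, ok in results:
        total[group] += 1
        correct[group] += ok
    return {g: correct[g] / total[g] for g in total}


print(accuracy_by_group(results))
# Overall accuracy here is 5/7 (about 71%), yet one group scores
# 100% while the other scores about 33%.
```

Benchmarks that report a single headline number, rather than a breakdown like this, are one reason such disparities went unnoticed for as long as they did.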
Legal And Ethical Concerns
Legal scholars and ethicists are debating the value of facial recognition versus the social implications of the technology. Is it a violation of the Fourth Amendment to the US Constitution? Is there a right to be anonymous in public? What legal limits should be placed on its use?
From a cybersecurity standpoint, facial recognition has great promise for authentication in computer systems. Sitting at a desktop or looking at a phone provides a valuable alternative to the passwords often used today. The use of facial recognition for finding people in crowds has driven more research into the technology and improved systems. My hope is that the systems will become more reliable for all users and be used responsibly.
To your safe computing,