Last time I wrote about Biometrics and Privacy. Part of that discussion involved training the networks needed to recognize individuals, either for authentication or in images.
A good description of the process of "training" the networks to recognize the images is the third part of the Medium series I referenced then. As the article notes, one important ingredient for effectively training a network is data. The more, the merrier. The author of the article comments, "In machine learning, having more data is almost always more important than having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data!"
Some have argued that this kind of training is wrong because the police could then more quickly discover whether or not some "Waldo" is in a particular image. The kicker is the "more quickly" part. They could also hire, say, a crowd of workers through Amazon Mechanical Turk to do the looking for them. Once some networks are better trained to recognize people, it might be fun to compare the results from Mechanical Turk with the results from the trained nets.
So let's get down to some real nitty-gritty. Consider an augmented reality app for a smartphone. You walk down the street and point it at people, and it says "Carlos Sanchez" or "Mary Lund". There is already an open source one, but it takes multiple images of the individual to be able to do the recognition. Real-time access to a recognition app might be fun or scary. I don't want people I've never met to pass me on the street and say, "Hi, John". Or imagine if they could find more data and learn that I'm visiting the city where they see me, so I'm not at home. Maybe nobody is home. Maybe the house is vulnerable to burglary. That's a bit far-fetched with current privacy settings, but it could become a reality.
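To make the matching step of such an app concrete, here is a minimal sketch assuming an embedding-based approach, which is how these systems typically work: a trained network reduces each face image to a numeric vector, and recognition is just a nearest-neighbor search over stored vectors. The names, the tiny 4-dimensional vectors, and the 0.6 threshold below are all made-up stand-ins for illustration, not output from a real model.

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the name of the closest stored embedding,
    or None if no one is within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        # Euclidean distance between the query face and a known face
        dist = float(np.linalg.norm(query - embedding))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Made-up 4-dimensional embeddings; real models use 128+ dimensions.
gallery = {
    "Carlos Sanchez": np.array([0.1, 0.9, 0.3, 0.5]),
    "Mary Lund":      np.array([0.8, 0.2, 0.7, 0.1]),
}
query = np.array([0.12, 0.88, 0.31, 0.52])  # near "Carlos Sanchez"
print(identify(query, gallery))
```

The privacy point follows directly from the design: once the gallery of embeddings exists, looking someone up is a cheap distance computation, which is exactly why the "more quickly" concern above matters.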
What do you think? Is this a privacy issue or not? Start the conversation in the comments below.
To your safe computing,
Cyber Security Training