
How Risky Is Facial Recognition?

Written by Monica Savaglia
Posted June 16, 2020

Artificial intelligence (AI) has been one of the most talked-about technologies of the past few years. Under the umbrella of AI is facial recognition, which has been highly controversial. Discussion of facial recognition and the role it will play has intensified as the technology becomes more advanced.

Some people use facial recognition to unlock their smartphone, and, at that level, it doesn’t seem too controversial. That might be the extent to which most of the public thinks about facial recognition.

How harmful can the technology be if it’s just being used to unlock a phone? Well, if we’ve learned anything from technology companies, it’s that they push boundaries hard with new technologies and usually reap the benefits years later.

A decade or so ago, you wouldn’t have thought that downloading certain apps to your phone and allowing them access to your phone’s microphone or photos meant giving the owners of those apps the ability to use your data.

No one was talking about the potentially detrimental impact of entering our personal data into online forms. However, the access we granted back then has had a significant effect on us, as technology companies sold that information to advertisers and others who used it to their advantage.

Right now isn’t the time to be naive. It's understandable that more than a decade ago we allowed these tech companies to access our data — it was new technology and we were excited about the capabilities. We never thought it would be used "against us." And that’s why it’s important to understand the technologies and innovations being introduced today.

We know that artificial intelligence is going to pave the way for some really helpful technologies, things that will make our lives easier. And AI facial recognition has the chance to be beneficial as long as it’s in the “right hands,” but it’s hard to determine whose hands are “right.” People are naturally wary of the technology and the power it could give a company or organization. 

AI researcher Ken Bodnar has said:

AI face recognition technology is damn good, but it is not very robust. This means that the neural network is well trained and capable of amazing feats of identification, but if one little parameter is off, it misidentifies you. 

He went on to explain the technology further, saying:

The way that it works is that everything is a probability with AI. So when it looks at a face, it has a range of proprietary algorithms and parameters it measures. The most accurate AI tools are Deep Belief Networks that winnow out features like double chins, eye distance, hair type, bushy eyebrows, fat lips, age parameters etc. But the "not-very-robust" categorization means that it is easy to fool because of the intrinsic nature of the way that neural networks work.
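Bodnar’s point — that matching is probabilistic and that one off parameter can flip a result — can be sketched in a few lines. This is a hypothetical, heavily simplified illustration, not any vendor’s actual algorithm: real systems use deep neural networks that reduce a face to a high-dimensional feature vector, but the thresholding logic looks roughly like this.

```python
import math

# Hypothetical sketch: pretend a face has been reduced to a small feature
# vector (eye distance, jaw width, etc.). A match is declared when the
# similarity between two vectors clears a tuned threshold -- nothing is
# ever a certain "yes," only a probability above a cutoff.

def cosine_similarity(a, b):
    """Similarity between two feature vectors, 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.99  # acceptance cutoff; tuning it trades false matches for misses

def is_match(enrolled, candidate):
    return cosine_similarity(enrolled, candidate) >= THRESHOLD

enrolled     = [0.42, 0.77, 0.31, 0.55]  # stored reference vector
same_capture = [0.42, 0.77, 0.31, 0.55]  # identical measurement: accepted
one_off      = [0.42, 0.77, 0.31, 0.95]  # one feature measured differently

print(is_match(enrolled, same_capture))  # True
print(is_match(enrolled, one_off))       # False -- one parameter off, rejected
```

Shifting a single feature is enough to push the similarity below the cutoff, which is the "not very robust" behavior Bodnar describes: impressive accuracy under good conditions, brittle when any input parameter drifts.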

This week, IBM (NYSE: IBM), Microsoft (NASDAQ: MSFT), and Amazon (NASDAQ: AMZN) all announced that they would be suspending the sale of their facial recognition technology to law enforcement agencies. The fear is that those agencies could abuse the power of the technology and invade individuals' privacy. Protests for justice and equality are still happening worldwide, and putting this type of technology in the hands of law enforcement agencies could put protesters in danger of being targeted or investigated.

Not only that, but the technology hasn’t been proven to be entirely accurate. Right now, the technology has difficulties when it comes to analyzing videos and images of minorities. And when you’re talking about having the police use the technology to aid arrests, it becomes problematic.

Dropping plans to sell facial recognition tech to police departments was a smart move for these major technology companies. At a time when policing is under heavy criticism, IBM, Microsoft, and Amazon decided not to be part of the problem and risk having blame shifted to them in the years to come when issues arise. However, those plans could change once the public becomes less focused on policing.

Michal Strahilevitz, a professor of marketing at St. Mary’s College of California, said:

As for the issues with the technology, a study out of MIT last year found that all of the facial recognition tools had major issues when identifying people of color. Another study out of the U.S. National Institute of Standards and Technology suggested facial recognition software had far more errors in attempting to recognize Black and Asian faces than it had in recognizing Caucasian ones. This means that black and brown people are more likely to be inaccurately identified, and thus unfairly targeted. This may not be intentional, but it ends up having a racial bias that is dangerous and unethical.

The issues surrounding the technology need to be ironed out before major companies sell facial recognition programs to law enforcement agencies or other organizations. 

This technology is powerful, and it could do some good in the world, but patience and a deeper understanding of its capabilities are important — especially when it comes to privacy rights. Regulations will almost certainly need to be put in place to protect citizens' rights.

Until next time,


Monica Savaglia

Monica Savaglia is Wealth Daily’s IPO specialist. With passion and knowledge, she wants to open up the world of IPOs and their long-term potential to everyday investors. She does this through her newsletter IPO Authority, a one-stop resource for everything IPO. She also contributes regularly to the Wealth Daily e-letter. To learn more about Monica, click here.
