Facial recognition technology has grown in prevalence, and today you can find it in many areas of human activity, including social media, smart homes, ATMs, and stores. Researchers have shown that AI algorithms are prone to adversarial attacks, in which an image is altered in ways imperceptible to the human eye yet sufficient to fool the model. While facial recognition systems reduced the need for physical contact during the pandemic, this technology is far from perfect in terms of cybersecurity. Our objectives were to find out whether the threats described in academic papers actually exist in practice and how real AI solutions can be attacked.
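The core idea behind such adversarial attacks can be sketched in a few lines. The example below is a minimal, hypothetical illustration of an FGSM-style perturbation (a gradient-sign step, as in Goodfellow et al.) applied to a toy linear "face matcher" rather than a real facial recognition model; the weights, embedding, and epsilon budget are all assumptions for demonstration only.

```python
import numpy as np

# Hypothetical linear "face matcher": positive score = match, negative = no match.
rng = np.random.default_rng(0)
w = rng.normal(size=64)                  # toy model weights (assumption)
x = 0.1 * rng.normal(size=64) + np.sign(w)  # toy input the model confidently matches

def score(img):
    """Matcher decision score for an input vector."""
    return float(w @ img)

# FGSM-style attack: step each pixel slightly against the gradient of the score.
# For a linear model the gradient w.r.t. the input is simply w.
eps = 2.0                                # per-pixel perturbation budget (assumption)
x_adv = x - eps * np.sign(w)             # small per-pixel change, flipped decision

print(score(x) > 0, score(x_adv) < 0)    # original matches, adversarial does not
```

Even though every pixel moves by at most `eps`, the perturbation is aligned with the model's gradient, so the small changes accumulate and flip the decision; on real images, the same principle yields perturbations invisible to a human observer.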
We will show the details of an AI Red Teaming engagement for facial recognition software and hardware solutions, identifying the most critical ways of attacking a facial recognition engine.
To conduct a proper assessment, we created our own attack taxonomy and evaluated the effectiveness of recent approaches to attacking facial recognition systems. We will present our own research, conducted in a real environment with different cameras and algorithms.