University of Toronto graduate student Avishek "Joey" Bose, working under the supervision of associate professor Parham Aarabi in the school's department of electrical and computer engineering, has created an algorithm that dynamically disrupts facial recognition systems. The project has privacy and even safety implications for systems that rely on machine learning -- and for all of us whose data may be used in ways we don't realize. Major companies such as Amazon, Google, Facebook and Netflix leverage machine learning today; financial trading firms, health care companies and smart-car manufacturers use it, too. What is machine learning, anyway?
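The article does not detail how the Toronto algorithm works, but systems that "disrupt" face recognition generally belong to the family of adversarial perturbations: small, targeted changes to an input that push a detector's score down. Below is a minimal sketch of the classic fast-gradient-sign idea against a toy linear detector -- the weights, score function and step size are all illustrative assumptions, not the project's actual method.

```python
import numpy as np

def fgsm_perturb(x, w, eps):
    """One gradient-sign step that lowers a toy linear detector's score.

    The toy detector is score(x) = w . x + b; the gradient of the score
    with respect to x is w, so stepping against sign(w) reduces it.
    This illustrates the general fast-gradient-sign technique, not the
    specific University of Toronto algorithm.
    """
    return x - eps * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical detector weights
b = 0.1
x = rng.normal(size=8)   # stand-in for face-image features

before = w @ x + b
after = w @ fgsm_perturb(x, w, eps=0.05) + b
print(after < before)    # the perturbed input scores lower
```

In a real attack the gradient comes from backpropagating through a trained face detector rather than from a hand-written linear score, but the perturbation step has the same shape.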
I feel that there was a sort of explosion a couple of years ago, after which the whole topic of Artificial Intelligence (AI) suddenly entered a wider audience's consciousness. All of a sudden we had Siri and Amazon's Alexa, and we started talking about self-driving cars. Jaan Tallinn, how did it happen? There were two different explosions. I believe a lot of the latter -- the surge in public attention -- had to do with the work of Elon Musk and Stephen Hawking. More importantly, the former was the deep learning revolution.
The arms race in Silicon Valley is on for artificial intelligence. Facebook is working on a virtual personal assistant that can read people's faces and decide whether or not to let them into your home. Google is investing in the technology to power self-driving cars, identify people on its photo service and build a better messaging app. Now Apple is adding to its artificial intelligence arsenal. The iPhone maker purchased Emotient, a San Diego maker of facial expression recognition software that detects emotions, with applications for advertisers, retailers, doctors and many other professionals.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset that is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
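The evaluation described above boils down to stratifying a benchmark by gender and skin type and computing the misclassification rate within each subgroup. A minimal sketch of that bookkeeping, on invented toy records (the field names and labels are illustrative assumptions, not the benchmark's actual schema):

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Misclassification rate per (gender, skin_type) subgroup.

    `records` is a list of dicts with hypothetical keys 'gender',
    'skin_type', 'true_label' and 'predicted_label'.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["gender"], r["skin_type"])
        totals[key] += 1
        if r["predicted_label"] != r["true_label"]:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# Toy data: one of two darker-skinned female faces is misgendered.
records = [
    {"gender": "female", "skin_type": "darker",
     "true_label": "female", "predicted_label": "male"},
    {"gender": "female", "skin_type": "darker",
     "true_label": "female", "predicted_label": "female"},
    {"gender": "female", "skin_type": "lighter",
     "true_label": "female", "predicted_label": "female"},
    {"gender": "male", "skin_type": "darker",
     "true_label": "male", "predicted_label": "male"},
]
rates = subgroup_error_rates(records)
print(rates[("female", "darker")])   # 0.5 on this toy data
```

Reporting per-subgroup rates, rather than a single aggregate accuracy, is exactly what exposes disparities like the 34.7% error rate cited above.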