Facial Recognition: the Advent of a New Era in Non-Digital Marketing?

#artificialintelligence

Facial recognition technology has gained a foothold in many industry verticals and continues to chart new ground. It has gained so much traction across a host of verticals and applications (according to Variant Market Research, its market is expected to be worth some $15.4 billion by 2024) that almost anyone, regardless of the kind of business they are in, should look into whether the technology could help them reach their business objectives. In part, this is owing to the ability of facial recognition to better equip and advance marketing, a field that is universal and of the utmost importance to most industries. Moreover, face recognition can make a dent in precisely those areas of marketing where the now-rampant digital marketing falls short or is simply irrelevant. What are those areas, how much headway has been made already, and what potentialities should one be aware of?


Who Owns Your Face?

The Atlantic - Technology

Data brokers already buy and sell detailed profiles that describe who you are. They track your public records and your online behavior to figure out your age, your gender, your relationship status, your exact location, how much money you make, which supermarket you shop at, and on and on and on. It's entirely reasonable to wonder how companies are collecting and using images of you, too. Facebook already uses facial recognition software to tag individual people in photos. Apple's new app, Clips, recognizes individuals in the videos you take.


US officials train facial recognition tech with photos of dead people and immigrants, report claims

Daily Mail - Science & tech

A unit of the U.S. Department of Commerce has been using photos of immigrants, abused children and dead people to train their facial recognition systems, a worrying new report has detailed. The National Institute of Standards and Technology (NIST) oversees a database, called the Facial Recognition Verification Testing program, that 'depends' on these types of controversial images, according to Slate. Scientists from Dartmouth College, the University of Washington and Arizona State University discovered the practice and laid out their findings in new research set to be reviewed for publication later this year. The Facial Recognition Verification Testing program was first established in 2017 as a way for companies, academic researchers and designers to evaluate their facial recognition technologies.


AI claims to be able to thwart facial recognition software, making you "invisible"

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a set of two neural networks: the first identifies faces, and the second works to disrupt the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
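The attacker-versus-detector idea can be sketched in miniature. The toy Python example below is an illustrative assumption, not the Toronto team's actual networks: in place of two neural networks, it pits a linear "detector" against an attacker that perturbs the input along the detector's own gradient until the detection confidence drops below threshold, which is the core mechanism behind adversarial disruption.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_score(x, w, b):
    """Sigmoid confidence that input vector x contains a face (toy linear model)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def adversarial_perturbation(x, w, b, eps=0.1, iters=50):
    """Iteratively nudge x against the detector's gradient until it stops detecting.

    For a linear logit x @ w + b, the gradient w.r.t. x is just w, so stepping
    along -sign(w) (a sign-gradient step) steadily lowers the detection score.
    """
    x_adv = x.copy()
    for _ in range(iters):
        if detector_score(x_adv, w, b) < 0.5:
            break  # detector no longer fires; stop perturbing
        x_adv -= eps * np.sign(w)
    return x_adv

# Toy detector weights and a "face" input the detector confidently recognizes
w = rng.normal(size=16)
b = 0.0
x = 0.5 * w  # aligned with w, so the initial score is high

before = detector_score(x, w, b)
after = detector_score(adversarial_perturbation(x, w, b), w, b)
print(before > 0.5, after < 0.5)
```

In the real system both sides are trained: the disruptor learns perturbations and the detector learns to resist them, which is what makes it an ongoing arms race rather than the one-shot attack sketched here.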


Facial recognition software is biased towards white men, researcher finds

#artificialintelligence

New research out of MIT's Media Lab is underscoring what other experts have reported or at least suspected before: facial recognition technology is subject to biases based on the data sets provided and the conditions in which algorithms are created. Joy Buolamwini, a researcher at the MIT Media Lab, recently built a dataset of 1,270 faces, using the faces of politicians, selected based on their country's rankings for gender parity (in other words, having a significant number of women in public office). Buolamwini then tested the accuracy of three facial recognition systems: those made by Microsoft, IBM, and Megvii of China. The results, which were originally reported in The New York Times, showed inaccuracies in gender identification dependent on a person's skin color. Gender was misidentified in less than one percent of lighter-skinned males; in up to seven percent of lighter-skinned females; in up to 12 percent of darker-skinned males; and in up to 35 percent of darker-skinned females.
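An audit of this kind ultimately reduces to computing a misidentification rate per demographic subgroup. The short sketch below shows that calculation on made-up example records; the data and group labels are illustrative assumptions, not Buolamwini's dataset or results.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, true gender, predicted gender).
# These rows are invented for illustration only.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    if truth != pred:
        errors[group] += 1

# Misidentification rate (percent) per subgroup
rates = {g: 100.0 * errors[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: {rate:.1f}% misidentified")
```

Breaking accuracy out by subgroup, rather than reporting a single aggregate number, is exactly what exposed the disparities described above: a system can look highly accurate overall while failing badly on an under-represented group.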