Face Recognition

Inside the urgent battle to stop UK police using facial recognition


The last day of January 2019 was sunny, yet bitterly cold in Romford, east London. Shoppers scurrying from retailer to retailer wrapped themselves in winter coats, scarves and hats. The temperature never rose above three degrees Celsius. For police officers positioned next to an inconspicuous blue van, just metres from Romford's Overground station, one man stood out among the thin winter crowds. The man, wearing a beige jacket and blue cap, had pulled his jacket over his face as he moved in the direction of the police officers.

Bias-Resilient Neural Network


The presence of bias and confounding effects is inarguably one of the most critical challenges in machine learning applications, and it has led to pivotal debates in recent years. Such challenges range from spurious associations of confounding variables in medical studies to racial bias in gender- or face-recognition systems. One solution is to curate datasets so that they do not reflect biases, which is a cumbersome and labor-intensive task. The alternative is to use the available data and build models that account for these biases. Traditional statistical methods apply straightforward techniques, such as residualization or stratification, to precomputed features to account for confounding variables.
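To make the residualization technique mentioned above concrete: regress each precomputed feature on the confounding variable by least squares and keep only the residuals, which are orthogonal to the confounder. A minimal sketch in Python, using NumPy and entirely synthetic data (the "age" confounder and contaminated feature are hypothetical, not from any study cited here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a confounder (say, age) that leaks into a feature.
n = 200
confounder = rng.normal(50.0, 10.0, size=n)           # hypothetical age values
feature = 0.8 * confounder + rng.normal(0.0, 1.0, n)  # feature contaminated by age

# Residualization: fit feature ~ confounder (with intercept) by least squares,
# then keep only the residuals as the de-confounded feature.
X = np.column_stack([np.ones(n), confounder])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
residual = feature - X @ beta

# The residualized feature is (numerically) uncorrelated with the confounder.
corr_before = np.corrcoef(confounder, feature)[0, 1]
corr_after = np.corrcoef(confounder, residual)[0, 1]
print(f"correlation before: {corr_before:.3f}, after: {corr_after:.3f}")
```

Because ordinary least squares residuals are orthogonal to the regressors, the correlation after residualization is zero up to floating-point error; stratification, by contrast, would split the data into confounder bins and analyze each bin separately.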

Racial Flaws in Facial Recognition Tools Show Up Again in ACLU Study


Another study has found that facial recognition software is unreliable when dealing with people of color. The American Civil Liberties Union of California recently released its study of Amazon's Rekognition software, which is marketed to law enforcement authorities.

Facial recognition only works if you are cisgender and white, study finds

Daily Mail - Science & tech

Humans aren't always great at identifying a person's gender based on visual cues, and a new study suggests that computers may be even worse at it. The researchers found the systems misclassified trans men up to 38 percent of the time and had no options for nonbinary people, meaning they were misclassified 100 percent of the time by default. 'These systems don't know any other language but male or female, so for many gender identities it is not possible for them to be correct,' researcher Jed Brubaker told CU Boulder Today. The software was much more accurate when evaluating cisgender people, correctly identifying cisgender women 98.3 percent of the time and cisgender men 97.6 percent of the time. The study was based on 2,450 images of faces collected from Instagram, each of which had a self-identified gender indicated by the poster in the form of a hashtag.

Chooch Facial Recognition bad? Facial authentication better? Facial authorization best?


Facial recognition currently has a very bad name, owing to fears of a surveillance state. Mass facial recognition does mean governments can potentially know where everyone is, all the time. Facial recognition answers the question "Who are you?" by comparing your biometric facial features against templates, typically produced by a neural network that has computed compact representations of enormous numbers of faces. Facebook has most of our faces on file. There is, however, a flipside: the answers to questions such as "Are you who you say you are?", "Are you allowed through this door?", "Should I let you transfer one million bitcoin from one account to another?", "Must I give you access to this top secret data?" and "Is it really you signing this document?"
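The distinction the passage draws can be made concrete: "Who are you?" is a 1:N search over an enrolled gallery, while "Are you who you say you are?" is a single 1:1 comparison. A schematic sketch in Python, with tiny made-up embeddings and a hypothetical threshold (real systems use high-dimensional embeddings from a trained network; every name and number here is illustrative):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-D embeddings; real embeddings have hundreds of dimensions.
enrolled = {
    "alice": np.array([0.9, 0.1, 0.0, 0.4]),
    "bob":   np.array([0.1, 0.8, 0.5, 0.0]),
}

THRESHOLD = 0.9  # hypothetical decision threshold

def identify(probe: np.ndarray):
    """1:N recognition -- 'Who are you?': search the whole gallery."""
    best = max(enrolled, key=lambda name: cosine_similarity(probe, enrolled[name]))
    return best if cosine_similarity(probe, enrolled[best]) >= THRESHOLD else None

def verify(probe: np.ndarray, claimed: str) -> bool:
    """1:1 authentication -- 'Are you who you say you are?': one comparison."""
    return cosine_similarity(probe, enrolled[claimed]) >= THRESHOLD

probe = np.array([0.88, 0.12, 0.02, 0.41])  # a new capture, close to "alice"
print(identify(probe))
print(verify(probe, "alice"))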

Facial recognition AI can't identify trans and non-binary people


Facial-recognition software from major tech companies is apparently ill-equipped to work on transgender and non-binary people, according to new research. A recent study by computer-science researchers at the University of Colorado Boulder found that major AI-based facial analysis tools--including Amazon's Rekognition, IBM's Watson, Microsoft's Azure, and Clarifai--habitually misidentified non-cisgender people. The researchers eliminated instances in which multiple individuals appeared in the photo, or in which at least 75% of the person's face wasn't visible. The remaining images were then divided by hashtag, amounting to 350 images in each group, and each group was tested against the facial analysis tools of the four companies.

Namaste, says India's first lip-syncing robot Mumbai News - Times of India


Also the world's first Hindi-speaking robot, Rashmi addressed an excited audience of 15,000 people with a warm "Namaste". Apart from Hindi, Rashmi also speaks English, Marathi and Bhojpuri. The humanoid uses Artificial Intelligence, Linguistic Interpretation, Visual Data and Face Recognition systems to converse like a human being. Rashmi was developed at a cost of Rs 5 lakh. She said Isro had inspected her and planned to send her to Mars in 2022.

Huawei surveillance: Chinese snooping tech seen spreading to nations vulnerable to abuse, keeping tabs on trouble-makers

The Japan Times

BELGRADE – When hundreds of video cameras with the power to identify and track individuals started appearing in the streets of Belgrade as part of a major surveillance project, some protesters began having second thoughts about joining anti-government demonstrations in the Serbian capital. Local authorities assert the system, created by Chinese telecommunications company Huawei, helps reduce crime in the city of 2 million. Critics contend it erodes personal freedoms, makes political opponents vulnerable to retribution and even exposes the country's citizens to snooping by the Chinese government. The cameras, equipped with facial recognition technology, are being rolled out across hundreds of cities around the world, particularly in poorer countries with weak track records on human rights where Beijing has increased its influence through big business deals. With the United States claiming that Chinese state authorities can get backdoor access to Huawei data, the aggressive rollout is raising concerns about the privacy of millions of people in countries with little power to stand up to China.

How Can Fintechs Onboard New Customers While Preventing Fraud


Financial technology (fintech) companies are finding new ways to meet consumer demand and create more financial inclusion on a global scale. While fintechs are on the rise, these companies still have to manage the same problem traditional financial institutions face: fraud. And while fraud permeates nearly all aspects of a financial transaction, one particular area of concern is onboarding. Client onboarding is when a new client begins their relationship with the fintech. Companies naturally want to make this process easy and streamlined, but in the financial world, that can be complicated.

These clothes use outlandish designs to trick facial recognition software into thinking you're not a human


Facial recognition technology is everywhere, and only becoming more pervasive. It's marketed as a security feature by companies like Apple and Google to prevent strangers from unlocking your iPhone or front door. It's also used by government agencies like police departments. More than half of adult Americans' faces are logged in police databases, according to a study by Georgetown researchers. Facial recognition technology is used by governments across the globe to identify and track dissidents, and has been deployed by police against Hong Kong protesters.