facial analysis


How artificial intelligence can help detect rare diseases: Researchers show that using portrait photos in combination with genetic and patient data improves diagnoses

#artificialintelligence

Many patients with rare diseases go through lengthy trials and tribulations before they are correctly diagnosed. "This results in a loss of valuable time that is actually needed for early therapy in order to avert progressive damage," explains the study's lead author. Together with an international team of researchers, he demonstrates how artificial intelligence can be used in facial analysis to make comparatively quick and reliable diagnoses. The researchers used data from 679 patients with 105 different diseases, each caused by a change in a single gene. These include, for example, mucopolysaccharidosis (MPS), which leads to bone deformation, learning difficulties and stunted growth.


The Quiet Growth of Race-Detection Software Sparks Concerns Over Bias

WSJ.com: WSJD - Technology

In the last few years, companies have started using such race-detection software to understand how certain customers use their products, who looks at their ads, or what people of different racial groups like. Others use the tool to seek different racial features in stock photography collections, typically for ads, or in security, to help narrow down the search for someone in a database. In China, where face tracking is widespread, surveillance cameras have been equipped with race-scanning software to track ethnic minorities. The field is still developing, and it is an open question how companies, governments and individuals will take advantage of such technology in the future. Use of the software is fraught, as researchers and companies have begun to recognize its potential to drive discrimination, posing challenges to widespread adoption.


Amazon Rekognition - How to guide for Images - The Last Dev

#artificialintelligence

In today's post, we are going to take a look at another AI service from AWS, Amazon Rekognition. We focus on images, using the service for object and scene detection, and we learn how to use it programmatically. You can also check out one of my previous posts about another AI service, Amazon Kendra, which lets you build your own search engine. You can find the code for this post here.
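As a taste of the programmatic usage the post describes, here is a minimal sketch of object and scene detection with Rekognition's DetectLabels API via boto3. The `detect_labels` call mirrors the real boto3 API; the `parse_labels` helper, the file-path handling, and the threshold values are illustrative assumptions, not code from the post.

```python
# Minimal sketch: object and scene detection with Amazon Rekognition's
# DetectLabels API via boto3. Requires AWS credentials to be configured.

def detect_labels(image_path, max_labels=10, min_confidence=75.0):
    """Send a local image to Rekognition and return its raw label response."""
    import boto3  # imported here so the parsing helper below stays dependency-free
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        return client.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=max_labels,
            MinConfidence=min_confidence,
        )

def parse_labels(response, min_confidence=75.0):
    """Extract (name, confidence) pairs above a confidence threshold
    from a DetectLabels response dictionary."""
    return [
        (label["Name"], label["Confidence"])
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]
```

For example, `parse_labels(detect_labels("photo.jpg"))` would return a list like `[("Person", 99.1), ("Car", 88.3)]`, filtered to labels Rekognition is reasonably confident about.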


An Algorithm That 'Predicts' Criminality Based on a Face Sparks a Furor

WIRED

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher. With "80 percent accuracy and with no racial bias," the paper, A Deep Neural Network Model to Predict Criminality Using Image Processing, claimed its algorithm could predict "if someone is a criminal based solely on a picture of their face." The press release has since been deleted from the university website. Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter it will not publish the research.


Is Artificial Intelligence Racial Bias Being Suppressed? - ReadWrite

#artificialintelligence

Artificial Intelligence (AI) and Machine Learning are used to power a variety of important modern software technologies. AI also powers the facial recognition software commonly used by law enforcement, landlords, and private citizens. Of all the uses for AI-powered software, facial recognition is a big deal. Security teams from large buildings that rely on video surveillance – like schools and airports – can benefit greatly from this technology. An AI algorithm has the potential to detect a known criminal or an unauthorized person on the property.


Highlights: Addressing fairness in the context of artificial intelligence

#artificialintelligence

When society uses artificial intelligence (AI) to help build judgments about individuals, fairness and equity are critical considerations. On Nov. 12, Brookings Fellow Nicol Turner-Lee sat down with Solon Barocas of Cornell University, Natasha Duarte of the Center for Democracy & Technology, and Karl Ricanek of the University of North Carolina Wilmington to discuss artificial intelligence in the context of societal bias, technological testing, and the legal system. Artificial intelligence is an element of many everyday services and applications, including electronic devices, online search engines, and social media platforms. In most cases, AI provides positive utility for consumers, such as when machines automatically detect credit card fraud or help doctors assess health care risks. However, there is a smaller percentage of cases, such as when AI helps inform decisions on credit limits or mortgage lending, where the technology has a higher potential to augment historical biases.


Towards a General Model of Knowledge for Facial Analysis by Multi-Source Transfer Learning

arXiv.org Machine Learning

This paper proposes a step toward obtaining general models of knowledge for facial analysis by addressing the question of multi-source transfer learning. More precisely, the proposed approach consists of two successive training steps: the first applies a combination operator to define a common embedding for the multiple sources, materialized by different existing trained models. The proposed operator relies on an auto-encoder, trained on a large dataset, that is efficient both in terms of compression ratio and transfer-learning performance. In a second step, a distillation approach is used to obtain a lightweight student model that mimics the collection of fused existing models. This student outperforms its teacher on novel tasks, achieving results on par with state-of-the-art methods on 15 facial analysis tasks (and domains) at an affordable training cost. Moreover, the student has 75 times fewer parameters than the original teacher and can be applied to a variety of novel face-related tasks.
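The distillation step in the abstract can be illustrated with the standard temperature-softened Kullback-Leibler objective, in which the student's softened output distribution is pushed toward the teacher's. This is a generic sketch of knowledge distillation, not the paper's actual training code; the logit values and temperature below are made-up examples.

```python
# Generic knowledge-distillation objective (illustrative sketch, not the
# paper's implementation): KL divergence between temperature-softened
# teacher and student output distributions.
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    m = max(l / T for l in logits)  # subtract max for numerical stability
    exps = [math.exp(l / T - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly reproduces the teacher's logits and grows as their softened distributions diverge; in the paper's setting, the "teacher" is the fused embedding of several pre-trained source models rather than a single network.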


AI "emotion recognition" can't be trusted

#artificialintelligence

As artificial intelligence is used to make more decisions about our lives, engineers have sought out ways to make it more emotionally intelligent. That means automating some of the emotional tasks that come naturally to humans -- most notably, looking at a person's face and knowing how they feel. To achieve this, tech companies like Microsoft, IBM, and Amazon all sell what they call "emotion recognition" algorithms, which infer how people feel based on facial analysis. For example, if someone has a furrowed brow and pursed lips, it means they're angry. If their eyes are wide, their eyebrows are raised, and their mouth is stretched, it means they're afraid, and so on.