
Face recognition experts perform better with AI as partner: Multidisciplinary study provides scientific underpinnings for accuracy of forensic facial identification


How accurate are professional face identifiers? A study appearing today in the Proceedings of the National Academy of Sciences brings answers. In work that combines forensic science with psychology and computer vision research, a team of scientists from the National Institute of Standards and Technology (NIST) and three universities has tested the accuracy of professional face identifiers, providing at least one revelation that surprised even the researchers: trained human beings perform best with a computer as a partner, not another person.

"This is the first study to measure face identification accuracy for professional forensic facial examiners, working under circumstances that apply in real-world casework," said NIST electronic engineer P. Jonathon Phillips. "Our deeper goal was to find better ways to increase the accuracy of forensic facial comparisons."

The team's effort began in response to a 2009 report by the National Research Council, "Strengthening Forensic Science in the United States: A Path Forward," which underscored the need to measure the accuracy of forensic examiner decisions.

The FBI Says Its Photo Analysis is Scientific Evidence. Scientists Disagree.

Mother Jones

This story was originally published by ProPublica.

At the FBI Laboratory in Quantico, Virginia, a team of about a half-dozen technicians analyzes pictures down to their pixels, trying to determine if the faces, hands, clothes or cars of suspects match images collected by investigators from cameras at crime scenes. The unit specializes in visual evidence and facial identification, and its examiners can aid investigations by making images sharper, revealing key details in a crime or ruling out potential suspects. But the work of image examiners has never had a strong scientific foundation, and the FBI's endorsement of the unit's findings as trial evidence troubles many experts and raises anew questions about the role of the FBI Laboratory as a standard-setter in forensic science.

FBI examiners have tied defendants to crime pictures in thousands of cases over the past half-century using unproven techniques, at times giving jurors baseless statistics to say the risk of error was vanishingly small. Much of the legal foundation for the unit's work is rooted in a 22-year-old comparison of bluejeans. Studies on several photo comparison techniques, conducted over the last decade by the FBI and outside scientists, have found they are not reliable. Since those studies were published, there has been no indication that lab officials have checked past casework for errors or inaccurate testimony. Image examiners continue to use disputed methods in an array of cases to bolster prosecutions against people accused of robberies, murder, sex crimes and terrorism.

The work of image examiners is a type of pattern analysis, a category of forensic science that has repeatedly led to misidentifications at the FBI and other crime laboratories. Before the discovery of DNA identification methods in the 1980s, most of the bureau's lab worked in pattern matching, which involves comparing features from items of evidence to the suspect's body and belongings. Examiners had long testified in court that they could determine which fingertip left a print, which gun fired a bullet, which scalp grew a hair "to the exclusion of all others." Research and exonerations by DNA analysis have repeatedly disproved these claims, and the U.S. Department of Justice no longer allows technicians and scientists from the FBI and other agencies to make such unequivocal statements, according to new testimony guidelines released last year. Though image examiners rely on similarly flawed methods, they have continued to testify to and defend their exactitude, according to a review of court records and examiners' written reports and published articles.

Face Recognition Experts Perform Better with AI as Partners


Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely.
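The headline result, that examiners perform best with an algorithm as a partner, amounts to fusing human and machine judgments on the same image pair. As a purely illustrative sketch (the function names, rating scales, and equal-weight averaging rule below are assumptions for illustration, not the study's actual procedure), a human identity rating and an algorithm similarity score can be mapped onto a common scale and averaged:

```python
# Hypothetical sketch of human-machine score fusion for face identification.
# Scales and the equal-weight average are illustrative assumptions.

def normalize(score, lo, hi):
    """Map a raw score from the range [lo, hi] onto [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse(human_rating, algo_score, human_range=(-3, 3), algo_range=(0, 1)):
    """Average a human identity rating (e.g. -3 = 'different person',
    +3 = 'same person') with an algorithm similarity score."""
    h = normalize(human_rating, *human_range)
    a = normalize(algo_score, *algo_range)
    return (h + a) / 2

# Example: examiner leans "same person" (+2 on a -3..+3 scale),
# algorithm reports 0.8 similarity.
print(round(fuse(2, 0.8), 3))  # → 0.817
```

A fused score near 1 suggests both judges agree the pair shows the same person; disagreement pulls the score toward the middle, which is one intuition for why a human–machine pair can beat either judge alone.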

Four Principles of Explainable AI as Applied to Biometrics and Facial Forensic Algorithms

Traditionally, researchers in automatic face recognition and biometric technologies have focused on developing accurate algorithms. With this technology being integrated into operational systems, engineers and scientists are being asked: do these systems meet societal norms? At the origin of this line of inquiry is "trust" of artificial intelligence (AI) systems. In this paper, we concentrate on adapting explainable AI to face recognition and biometrics, and we present four principles of explainable AI as applied to these domains. The principles are illustrated by four case studies, which show the challenges and issues in developing algorithms that can produce explanations.

Researchers say fMRI scans now better than a polygraph in lie detection

Daily Mail - Science & tech

Sweaty palms and a racing heartbeat might help you to spot a liar, but the most tell-tale evidence lies in the brain. For the first time, researchers have conducted a controlled comparison of fMRI scans and polygraph testing in lie detection. The study revealed fMRI to be the far more effective method: it picks up on the activation of decision-making areas in the brain when a person tells a lie, allowing it to identify deception up to 90 percent of the time. Overall, neuroscience experts without any prior experience in lie detection were 24 percent more likely to spot deception than the professional polygraph examiners.