Face Recognition
Railway firms diversify ticket gate access methods
Railway companies in Japan are increasingly expanding fare payment options, offering passengers more diverse ways to pass through station ticket gates. In addition to prepaid transportation integrated circuit (IC) cards, such as Suica and Pasmo, a growing number of railway operators are introducing contactless credit card payments. Some routes have also started adopting facial recognition systems for gate access. With the number of overseas visitors on the rise, railway companies are seeking to enhance passenger convenience through more accessible services.
ICE Rolls Facial Recognition Tools Out to Officers' Phones
WIRED published a shocking investigation this week based on records, including audio recordings, of hundreds of emergency calls from United States Immigration and Customs Enforcement (ICE) detention centers. The calls, which include reports of staff sexual assaults, suicide attempts, and head injuries, indicate a system inundated by life-threatening incidents, delayed treatment, and overcrowding. In a 6-3 decision on Friday, the US Supreme Court upheld a Texas porn ID law, finding that age verification for explicit sites is constitutional. In a dissent, Justice Elena Kagan warned that this determination ignores First Amendment precedent and will have privacy implications for adults. Looking at the US bombing of Iranian nuclear sites last weekend, President Donald Trump posted initial announcements of the strikes on the social network Truth Social, which then began suffering intermittent outages.
Supreme Court upholds Texas age-verification law
Today, the Supreme Court decided to uphold Texas's age-verification law for porn sites. The decision was 6-3, with Justices Kagan, Sotomayor, and Jackson dissenting. Around a third of U.S. states have enacted such laws. They typically require sites where more than a third of the content is explicit to make viewers submit some verification of age, such as a facial recognition scan or a government ID. In January, SCOTUS heard a case about the constitutionality of Texas's law in particular, Free Speech Coalition v. Paxton.
Facial recognition error sees woman accused of theft
In one email from Facewatch seen by the BBC, the firm told Ms Horan it "relies on information submitted by stores" and the Home Bargains branches involved had since been "suspended from using the Facewatch system". Madeleine Stone, senior advocacy officer at the civil liberties campaign group Big Brother Watch, said they had been contacted by more than 35 people who have complained of being wrongly placed on facial recognition watchlists. "They're being wrongly flagged as criminals," Ms Stone said. "They've been given no due process, kicked out of stores. This is having a really serious impact."
This Glitchy, Error-Prone Tool Could Get You Deported--Even If You're a U.S. Citizen
Juan Carlos Lopez-Gomez, despite his U.S. citizenship and Social Security card, was arrested on April 16 on the unfounded suspicion that he was an "unauthorized alien." Immigration and Customs Enforcement kept him in county jail for 30 hours "based on biometric confirmation of his identity", an obvious mistake by facial recognition technology. Another U.S. citizen, Jensy Machado, was held at gunpoint and handcuffed by ICE agents. He was another victim of mistaken identity after someone else gave his home address on a deportation order.
Decoupling "when to update" from "how to update"
A useful approach to obtain data is to be creative and mine data from various sources that were created for different purposes. Unfortunately, this approach often leads to noisy labels. In this paper, we propose a meta algorithm for tackling the noisy labels problem. The key idea is to decouple "when to update" from "how to update". We demonstrate the effectiveness of our algorithm by mining data for gender classification, combining the Labeled Faces in the Wild (LFW) face recognition dataset with a textual genderizing service, which leads to a noisy dataset.
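The decoupling step is easy to state concretely: keep two independently initialized predictors and apply the usual gradient update only on samples where they disagree, so the disagreement decides "when to update" while ordinary backpropagation decides "how". The sketch below illustrates that disagreement-gated step; the function name, network handles, and cross-entropy criterion are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn


def decoupled_update_step(net_a, net_b, opt_a, opt_b, x, noisy_y,
                          criterion=nn.CrossEntropyLoss()):
    """One step of a disagreement-gated update (sketch, illustrative names).

    Two independently initialized networks decide *when* to update: only
    samples on which their predictions disagree trigger a gradient step.
    The step itself (*how* to update) is a standard loss minimization on
    the (possibly noisy) labels.
    """
    with torch.no_grad():
        pred_a = net_a(x).argmax(dim=1)
        pred_b = net_b(x).argmax(dim=1)
        disagree = pred_a != pred_b          # mask of disputed samples

    if disagree.any():
        x_d, y_d = x[disagree], noisy_y[disagree]
        for net, opt in ((net_a, opt_a), (net_b, opt_b)):
            opt.zero_grad()
            loss = criterion(net(x_d), y_d)  # ordinary "how to update" step
            loss.backward()
            opt.step()
```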
Learning a Metric Embedding for Face Recognition using the Multibatch Method
This work is motivated by the engineering task of achieving near state-of-the-art face recognition on a minimal computing budget running on an embedded system. Our main technical contribution centers around a novel training method, called Multibatch, for similarity learning, i.e., for the task of generating an invariant "face signature" through training pairs of "same" and "not-same" face images. The Multibatch method first generates signatures for a mini-batch of k face images and then constructs an unbiased estimate of the full gradient by relying on all k^2 - k pairs from the mini-batch. We prove that the variance of the Multibatch estimator is bounded by O(1/k^2), under some mild conditions. In contrast, the standard gradient estimator that relies on k/2 random pairs has a variance of order 1/k.
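As a rough illustration of the idea, the sketch below embeds a mini-batch of k face images once and averages a pairwise loss over all k^2 - k ordered pairs instead of k/2 disjoint random pairs. The contrastive form, margin, and helper names are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def multibatch_pair_loss(embed_net, images, identities, margin=1.0):
    """Multibatch-style objective (sketch): embed k images once, then
    average a contrastive-style loss over all k^2 - k ordered pairs.
    """
    z = F.normalize(embed_net(images), dim=1)       # (k, d) face signatures
    k = z.size(0)
    dist = torch.cdist(z, z)                        # (k, k) pairwise distances
    same = identities.unsqueeze(0) == identities.unsqueeze(1)   # same-identity mask

    off_diag = ~torch.eye(k, dtype=torch.bool, device=z.device)
    pos = dist[same & off_diag]                     # "same" pairs: pull together
    neg = dist[~same & off_diag]                    # "not-same" pairs: push apart
    loss = pos.pow(2).sum() + F.relu(margin - neg).pow(2).sum()
    return loss / (k * k - k)                       # average over all k^2 - k pairs
```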
Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition
Recognizing facial action units (AUs) from spontaneous facial expressions remains a challenging problem. Recently, CNNs have shown promise for facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We propose a novel Incremental Boosting CNN (IB-CNN) that integrates boosting into the CNN via an incremental boosting layer, which selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and the individual weak classifiers is proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases demonstrate that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, and outperforms state-of-the-art CNN-based methods in AU recognition.
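A heavily simplified reading of the incremental boosting layer is sketched below: each lower-layer neuron is treated as a weak classifier, scored on every mini-batch, and its boosting weight is folded into a running estimate rather than recomputed from scratch. The scoring rule, update rate, and class names are illustrative assumptions, not the paper's exact formulation.

```python
import torch


class IncrementalBoostingLayer(torch.nn.Module):
    """Simplified incremental-boosting-style layer (sketch).

    Neurons of the previous layer act as weak classifiers. On each
    mini-batch the layer scores neurons by how well they correlate with
    the (binary) AU labels and folds the new scores into running boosting
    weights, so the selection is updated incrementally across batches.
    """

    def __init__(self, num_neurons, momentum=0.9):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("boost_w", torch.ones(num_neurons) / num_neurons)

    def forward(self, features, labels=None):
        if self.training and labels is not None:
            with torch.no_grad():
                # crude discriminability score: |correlation| of each neuron with the label
                y = labels.float() * 2 - 1                     # {0,1} -> {-1,+1}
                score = (features * y.unsqueeze(1)).mean(0).abs()
                score = score / (score.sum() + 1e-8)
                # incremental update of the boosting weights across mini-batches
                self.boost_w.mul_(self.momentum).add_((1 - self.momentum) * score)
        # boosted output: weighted combination of neuron responses
        return features @ self.boost_w
```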
SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection
Detection of face forgery videos remains a formidable challenge in the field of digital forensics, especially the generalization to unseen datasets and common perturbations. In this paper, we tackle this issue by leveraging the synergy between audio and visual speech elements, embarking on a novel approach through audio-visual speech representation learning. Our work is motivated by the finding that audio signals, enriched with speech content, can provide precise information effectively reflecting facial movements. To this end, we first learn precise audio-visual speech representations on real videos via a self-supervised masked prediction task, which encodes both local and global semantic information simultaneously. Then, the derived model is directly transferred to the forgery detection task. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods in terms of cross-dataset generalization and robustness, without the participation of any fake video in model training. The code is publicly available.
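In spirit, the detection step reduces to checking whether the visual speech embedding agrees with the audio speech embedding for the same clip. The sketch below assumes a hypothetical pretrained audio-visual model exposing `encode_video` and `encode_audio` methods (assumed names, not a real API) and flags a clip when the mean cosine similarity between the two streams falls below a threshold; no fake videos are needed at training time.

```python
import torch
import torch.nn.functional as F


def forgery_score(av_model, video_frames, audio_feats, threshold=0.5):
    """Audio-visual consistency check for forgery detection (sketch).

    `av_model.encode_video` / `av_model.encode_audio` are assumed methods of
    a self-supervised audio-visual speech model returning per-frame
    embeddings of matching length. Low similarity between the two streams
    suggests the visual speech does not match the audio, i.e. a likely fake.
    """
    with torch.no_grad():
        v = F.normalize(av_model.encode_video(video_frames), dim=-1)  # (T, d)
        a = F.normalize(av_model.encode_audio(audio_feats), dim=-1)   # (T, d)
    sim = (v * a).sum(-1).mean()          # mean per-frame cosine similarity
    return {"similarity": sim.item(), "is_fake": sim.item() < threshold}
```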
UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition
Sample-to-class-based face recognition models cannot fully explore the cross-sample relationship among large amounts of facial images, while sample-to-sample-based models require sophisticated pairing processes for training. Furthermore, neither method satisfies the requirements of real-world face verification applications, which expect a unified threshold separating positive from negative facial pairs. In this paper, we propose a unified threshold integrated sample-to-sample based loss (USS loss), which features an explicit unified threshold for distinguishing positive from negative pairs. Inspired by our USS loss, we also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship. Extensive evaluation on multiple benchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace, demonstrates that the proposed USS loss is highly efficient and can work seamlessly with sample-to-class-based losses. The embedded loss (USS and sample-to-class Softmax loss) overcomes the pitfalls of previous approaches, and the trained facial model UniTSFace exhibits exceptional performance, outperforming state-of-the-art methods such as CosFace, ArcFace, VPL, AnchorFace, and UNPG.
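To make the unified-threshold idea concrete, the sketch below implements an illustrative sample-to-sample loss in which a single learnable threshold is shared by all positive and negative cosine-similarity pairs, so the same threshold can be used at verification time. The softplus form, scale, and initial threshold are assumptions and not the paper's exact USS formulation.

```python
import torch
import torch.nn.functional as F


class UnifiedThresholdPairLoss(torch.nn.Module):
    """Sample-to-sample loss with an explicit, learnable unified threshold (sketch).

    Genuine pairs are pushed above the shared threshold and impostor pairs
    below it, so a single threshold separates positives from negatives at
    test time. Assumes each batch contains both genuine and impostor pairs.
    """

    def __init__(self, init_threshold=0.3, scale=32.0):
        super().__init__()
        self.t = torch.nn.Parameter(torch.tensor(init_threshold))  # unified threshold
        self.scale = scale

    def forward(self, embeddings, identities):
        z = F.normalize(embeddings, dim=1)
        sim = z @ z.t()                                             # cosine similarities
        same = identities.unsqueeze(0) == identities.unsqueeze(1)
        off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)

        pos = sim[same & off_diag]      # genuine pairs: should exceed the threshold
        neg = sim[~same & off_diag]     # impostor pairs: should fall below it
        loss_pos = F.softplus(self.scale * (self.t - pos)).mean()
        loss_neg = F.softplus(self.scale * (neg - self.t)).mean()
        return loss_pos + loss_neg
```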