recognizability
Predicts Human Visual Selectivity
¹For our experiments we count the number of AMT Human Intelligence Tasks (HITs) that were completed; we did not exclude AMT workers from completing multiple HITs. The authors posit that this noisiness arises because the gradient may fluctuate sharply at small scales, which seems plausible, especially since, due to ReLU activation functions, the output is generally not even continuously differentiable. This CAM indicates the discriminative regions of the image used by the CNN to identify that class. We used each of the above passive attention methods to acquire attention maps from each of the models in the top part of Table 2.
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- North America > United States > Rhode Island > Providence County > Providence (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
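The excerpt above mentions a CAM indicating the discriminative image regions a CNN uses for a class. As a hedged sketch of the standard Class Activation Mapping technique (Zhou et al., 2016) it appears to refer to, a CAM weights each feature map of the last convolutional layer by the classifier weight for the target class and sums; the function and array shapes here are illustrative, not taken from the excerpted paper:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM sketch: classifier-weighted sum of last-conv-layer feature maps.

    feature_maps: (C, H, W) activations from the final conv layer
    fc_weights:   (num_classes, C) weights of the final linear layer
    class_idx:    index of the class to explain
    """
    w = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)  # contract channels -> (H, W)
    cam = np.maximum(cam, 0.0)                   # keep positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1]
    return cam

# toy example: 4 feature maps of size 7x7 under a 10-class linear head
rng = np.random.default_rng(0)
cam = class_activation_map(rng.standard_normal((4, 7, 7)),
                           rng.standard_normal((10, 4)), class_idx=3)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image as a heatmap.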
Diversity vs. Recognizability: Human-like generalization in one-shot generative models
Robust generalization to new concepts has long remained a distinctive feature of human intelligence. However, recent progress in deep generative models has now led to neural architectures capable of synthesizing novel instances of unknown visual concepts from a single training example. Yet, a more precise comparison between these models and humans is not possible because existing performance metrics for generative models (e.g., FID, IS, likelihood) are not appropriate for the one-shot generation scenario. Here, we propose a new framework to evaluate one-shot generative models along two axes: sample recognizability vs. diversity (i.e., intra-class variability). Using this framework, we perform a systematic evaluation of representative one-shot generative models on the Omniglot handwritten dataset. We first show that GAN-like and VAE-like models fall on opposite ends of the diversity-recognizability space. Extensive analyses of the effect of key model parameters further reveal that spatial attention and context integration contribute linearly to the diversity-recognizability trade-off. In contrast, disentanglement transports the model along a parabolic curve that could be used to maximize recognizability. Using the diversity-recognizability framework, we identify models and parameters that closely approximate human data.
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- North America > United States > Massachusetts (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
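The abstract above scores one-shot generative models along two axes. A minimal sketch of how those axes are commonly operationalized (the paper's exact estimators may differ; the function names and toy feature space are illustrative): recognizability as the fraction of generated samples a pretrained critic classifies as the intended concept, diversity as the mean distance of a concept's sample features to their centroid.

```python
import numpy as np

def recognizability(pred_labels, true_labels):
    """Fraction of generated samples that a pretrained critic network
    assigns to the intended concept (higher = more recognizable)."""
    return float(np.mean(np.asarray(pred_labels) == np.asarray(true_labels)))

def diversity(features):
    """Intra-class variability: mean Euclidean distance of one concept's
    sample features to their centroid (higher = more diverse)."""
    f = np.asarray(features, dtype=float)
    return float(np.mean(np.linalg.norm(f - f.mean(axis=0), axis=1)))

# toy check: two samples of one concept in a 2-D feature space
div = diversity([[0.0, 0.0], [2.0, 0.0]])    # centroid (1, 0); both at distance 1
rec = recognizability([1, 1, 0], [1, 0, 0])  # 2 of 3 samples correctly labeled
```

Plotting each model at its (diversity, recognizability) point is what lets GAN-like and VAE-like models be compared against the human data point in one space.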
TransFIRA: Transfer Learning for Face Image Recognizability Assessment
Allen Tu, Kartik Narayan, Joshua Gleason, Jennifer Xu, Matthew Meyn, Tom Goldstein, Vishal M. Patel
Face recognition in unconstrained environments such as surveillance, video, and web imagery must contend with extreme variation in pose, blur, illumination, and occlusion, where conventional visual quality metrics fail to predict whether inputs are truly recognizable to the deployed encoder. Existing FIQA methods typically rely on visual heuristics, curated annotations, or computationally intensive generative pipelines, leaving their predictions detached from the encoder's decision geometry. We introduce TransFIRA (Transfer Learning for Face Image Recognizability Assessment), a lightweight and annotation-free framework that grounds recognizability directly in embedding space. TransFIRA delivers three advances: (i) a definition of recognizability via class-center similarity (CCS) and class-center angular separation (CCAS), yielding the first natural, decision-boundary-aligned criterion for filtering and weighting; (ii) a recognizability-informed aggregation strategy that achieves state-of-the-art verification accuracy on BRIAR and IJB-C while nearly doubling correlation with true recognizability, all without external labels, heuristics, or backbone-specific training; and (iii) new extensions beyond faces, including encoder-grounded explainability that reveals how degradations and subject-specific factors affect recognizability, and the first recognizability-aware body recognition assessment. Experiments confirm state-of-the-art results on faces, strong performance on body recognition, and robustness under cross-dataset shifts. Together, these contributions establish TransFIRA as a unified, geometry-driven framework for recognizability assessment (encoder-specific, accurate, interpretable, and extensible across modalities), significantly advancing FIQA in accuracy, explainability, and scope.
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.04)
- North America > United States > Maryland > Prince George's County > College Park (0.04)
- North America > United States > New York (0.04)
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.04)
- Research Report > New Finding (0.70)
- Research Report > Experimental Study (0.48)
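The TransFIRA abstract names two embedding-space quantities, CCS and CCAS, without giving formulas. The sketch below is one plausible reading, not the paper's definition: CCS as cosine similarity between a face embedding and its class center, and CCAS as the angular margin between the embedding's angle to its own center and its angle to the nearest other center; both functions and the toy 2-D embeddings are illustrative.

```python
import numpy as np

def class_center_similarity(emb, center):
    """CCS (assumed form): cosine similarity between an embedding
    and its class center."""
    e = np.asarray(emb, dtype=float)
    c = np.asarray(center, dtype=float)
    return float(e @ c / (np.linalg.norm(e) * np.linalg.norm(c)))

def class_center_angular_separation(emb, own_center, other_centers):
    """CCAS (assumed form): angle to the nearest competing class center
    minus the angle to the embedding's own center. Large positive values
    mean the sample sits well inside its decision region."""
    own = np.arccos(np.clip(class_center_similarity(emb, own_center), -1.0, 1.0))
    nearest = min(np.arccos(np.clip(class_center_similarity(emb, c), -1.0, 1.0))
                  for c in other_centers)
    return float(nearest - own)

# toy check in a 2-D embedding space
ccs = class_center_similarity([1.0, 0.0], [2.0, 0.0])  # parallel vectors
ccas = class_center_angular_separation([1.0, 0.1], [1.0, 0.0],
                                       [[0.0, 1.0], [-1.0, 0.0]])
```

Under this reading, low-CCS or negative-CCAS inputs are the ones a deployed encoder cannot recognize, which is what makes the criterion useful for filtering and weighting frames.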