- Oceania > Australia (0.04)
- North America > Canada > Newfoundland and Labrador > Newfoundland (0.04)
- Europe > Poland (0.04)
- (2 more...)
Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds
Kvinge, Henry; Jorgenson, Grayson; Brown, Davis; Godfrey, Charles; Emerson, Tegan
While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain. This is especially true when trying to understand the impact of model design choices, such as model architecture or training algorithm, on hidden representation geometry and dynamics. In this work we present a new approach to studying such representations inspired by the idea of a frame on the tangent bundle of a manifold. Our construction, which we call a neural frame, is formed by assembling a set of vectors representing specific types of perturbations of a data point, for example infinitesimal augmentations, noise perturbations, or perturbations produced by a generative model, and studying how these change as they pass through a network. Using neural frames, we make observations about the way that models process, layer-by-layer, specific modes of variation within a small neighborhood of a datapoint. Our results provide new perspectives on a number of phenomena, such as the manner in which training with augmentation produces model invariance or the proposed trade-off between adversarial training and model generalization.
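The abstract above describes assembling perturbation vectors at a data point and tracking how they transform layer by layer. The following is a minimal numpy sketch of that idea, not the authors' implementation: the two-layer random MLP is a hypothetical stand-in for a trained vision model, and random noise directions stand in for the augmentation or generative-model perturbations the paper considers. Each frame vector is pushed through the network by finite differences, approximating the action of each layer's Jacobian on the frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP (hypothetical stand-in for a trained vision model).
W1, b1 = rng.normal(size=(16, 8)) / 4, np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)) / 4, np.zeros(4)

def layers(x):
    """Return the hidden representation after each layer."""
    h1 = np.tanh(W1 @ x + b1)
    h2 = np.tanh(W2 @ h1 + b2)
    return [h1, h2]

def neural_frame(x, directions, eps=1e-4):
    """Finite-difference image of each perturbation direction at every layer."""
    base = layers(x)
    frame = []
    for v in directions:
        pert = layers(x + eps * v)
        frame.append([(p - b) / eps for p, b in zip(pert, base)])
    return frame  # frame[i][l]: how direction i looks at layer l

x = rng.normal(size=8)
dirs = [rng.normal(size=8) for _ in range(3)]  # e.g. noise perturbations
frame = neural_frame(x, dirs)

# Norm of each frame vector per layer: how strongly each mode of
# variation is preserved or suppressed as depth increases.
for i, per_layer in enumerate(frame):
    print(i, [round(float(np.linalg.norm(v)), 3) for v in per_layer])
```

Comparing these per-layer norms (or the angles between frame vectors) across architectures or training regimes is the kind of layer-by-layer analysis the abstract refers to.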
- North America > Mexico > Gulf of Mexico (0.14)
- Europe > United Kingdom > North Sea > Southern North Sea (0.04)
- North America > United States > Texas > El Paso County > El Paso (0.04)
- (3 more...)
- Energy (0.46)
- Government > Regional Government (0.46)
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Nguyen, Giang; Kim, Daeyoung; Nguyen, Anh
Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many real-world, high-stakes applications. Hundreds of papers have proposed new feature attribution methods or discussed and harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods were only evaluated on proxy automatic-evaluation metrics [52, 66, 68]. In this paper, we conduct the first large-scale user study on 320 lay and 11 expert users to shed light on the effectiveness of state-of-the-art attribution methods in assisting humans with ImageNet classification, Stanford Dogs fine-grained classification, and these two tasks when the input image contains adversarial perturbations. We found that, overall, feature attribution is surprisingly no more effective than showing humans nearest training-set examples. On the hard task of fine-grained dog categorization, presenting attribution maps to humans does not help but instead hurts the performance of human-AI teams compared to AI alone. Importantly, we found automatic attribution-map evaluation measures to correlate poorly with actual human-AI team performance. Our findings encourage the community to rigorously test their methods on downstream human-in-the-loop applications and to rethink existing evaluation metrics.
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.48)
- Law (0.93)
- Health & Medicine > Therapeutic Area (0.68)
- Health & Medicine > Diagnostic Medicine (0.46)
- Government > Regional Government (0.46)
HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models
Townsend, James; Bird, Thomas; Kunze, Julius; Barber, David
We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.
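The property the abstract relies on, that a fully convolutional model applies unchanged to inputs of any size, can be illustrated with a minimal numpy sketch. This is not the HiLLoC VAE or the Bits-Back with ANS coder; a single hypothetical 3x3 filter stands in for a full convolutional network, showing that its weights carry no dependence on input resolution.

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))  # one shared 3x3 filter: the "model's" only parameters

def conv3x3(img, k):
    """Zero-padded 3x3 convolution; output spatial size equals input size."""
    H, W = img.shape
    padded = np.pad(img, 1)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

small = rng.random((32, 32))    # training-resolution input
large = rng.random((128, 128))  # much larger test image

# The same weights apply to both sizes with no change to the model.
print(conv3x3(small, kernel).shape)  # (32, 32)
print(conv3x3(large, kernel).shape)  # (128, 128)
```

Because every layer of a fully convolutional VAE shares this property, a model trained on 32x32 ImageNet crops can be evaluated, and used as a compression prior, on full-size photographs without retraining.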
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models (0.83)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.66)
Building a Computer Vision Model: Approaches and datasets - KDnuggets
Computer vision is one of the hottest subfields of machine learning, given its wide variety of applications and tremendous potential. Its goal: to replicate the powerful capabilities of human vision. But how is this achieved with algorithms? Let's have a look at the most important datasets and approaches. Computer vision algorithms are not magic.