Interpretable Image Recognition with Hierarchical Prototypes

arXiv.org Machine Learning

Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g., classifying a handgun as a weapon when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing for each classification to be interpreted.
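The idea of classifying at every level of a taxonomy can be sketched with a minimal nearest-prototype walk: starting at the root, the model compares an image's feature vector to a prototype for each child class and descends to the closest one. This is only an illustrative toy, not the paper's method — the taxonomy, the hand-picked prototype vectors, and the `max_dist` cutoff (which mimics stopping at a coarser level for unseen classes) are all assumptions for demonstration.

```python
import numpy as np

# Toy taxonomy: each internal node maps to its child classes (illustrative only).
taxonomy = {
    "animal": ["primate", "bird"],
    "primate": ["chimpanzee", "gorilla"],
}

# One prototype vector per class; in the paper these are learned, here hand-picked.
prototypes = {
    "primate": np.array([1.0, 0.0]),
    "bird": np.array([-1.0, 0.0]),
    "chimpanzee": np.array([1.0, 1.0]),
    "gorilla": np.array([1.0, -1.0]),
}

def classify(feature, node="animal", max_dist=2.0):
    """Descend the taxonomy, choosing the nearest child prototype at each level.

    If no child prototype is within max_dist, stop at the current level --
    a crude stand-in for placing an unfamiliar class at a coarser node.
    """
    path = []
    while node in taxonomy:
        best = min(taxonomy[node],
                   key=lambda c: np.linalg.norm(feature - prototypes[c]))
        if np.linalg.norm(feature - prototypes[best]) > max_dist:
            break  # too far from every child prototype: keep the coarser label
        node = best
        path.append(node)
    return path

print(classify(np.array([0.9, 0.8])))    # ['primate', 'chimpanzee']
print(classify(np.array([10.0, 10.0])))  # [] -- nothing close, no label assigned
```

Each element of the returned path is a separately explainable decision: the prediction at one level can be justified by its distance to that level's prototypes, independently of the levels above and below it.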


Artificial Intelligence and Machine Learning in Medical Imaging

#artificialintelligence

The two major tasks in medical imaging that appear naturally predestined to be solved with AI algorithms are segmentation and classification. Most of the techniques used in medical imaging were conventional image processing or, more broadly, computer vision algorithms. One can find many early works using artificial neural networks, the backbone of deep learning. However, most work focused on conventional computer vision, which relied, and still relies, on "handcrafted" features: techniques manually designed to extract useful and discriminative information from medical images. Some progress was visible in the late 1990s and early 2000s (for instance, the SIFT method in 1999, or visual dictionaries in the early 2000s), but there were no breakthroughs.


IBM created software using NYPD images that can search for people by SKIN COLOR, report claims

Daily Mail - Science & tech

From 2012 to 2016, the New York City Police Department supplied IBM with thousands of surveillance images of unaware New Yorkers for the development of software that could help track down people 'of interest,' a shocking report claims. IBM's technology was designed to match stills of individuals with specific physical characteristics, including clothing color, age, gender, hair color, and even skin tone, according to The Intercept. Internal documents and sources involved with the program cited by the report reveal that IBM released an early iteration of its video analytics software by 2013, before improving its capabilities over the following years. The report adds to growing concerns about the potential for racial profiling with advanced surveillance technology. According to the investigation by The Intercept and the Investigative Fund, the NYPD did not end up using IBM's analytics program as part of its larger surveillance system, and discontinued it by 2016.


Google could soon 'see' like humans with its image recognition program

AITopics Original Links

As humans, we can distinguish between different objects easily - such as dogs wearing hats, or oranges and bananas in a bag - but for computers this has typically been much more difficult. A team of California-based Google researchers has developed an advanced image classification and detection algorithm called GoogLeNet, which is twice as effective as previous programs. It is accurate enough to locate and distinguish between a range of object sizes within a single image, and it can also identify an object within, or on top of, another object in the photo. The software recently placed first in the ImageNet large-scale visual recognition challenge (ILSVRC).