Image Understanding


CS194-26/294-26 Intro to Computer Vision and Computational Photography

#artificialintelligence

COURSE DESCRIPTION: The aim of this advanced undergraduate course is to introduce students to computing with visual data (images and video). We will cover acquisition, representation, and manipulation of visual information from digital photographs (image processing), image analysis and visual understanding (computer vision), and image synthesis (computational photography). Key algorithms will be presented, ranging from classical to contemporary (e.g., ConvNets, GANs), with an emphasis on using these techniques to build practical systems. This hands-on emphasis will be reflected in the programming assignments, in which students will have the opportunity to acquire their own images and develop, largely from scratch, the image analysis and synthesis tools needed for real applications.


Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification

arXiv.org Machine Learning

Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. Most previous works on AUC maximization focus on the optimization perspective, designing efficient stochastic algorithms, while studies of the generalization performance of DAM on difficult tasks are missing. In this work, we aim to make DAM more practical for interesting real-world applications (e.g., medical image classification). First, we propose a new margin-based surrogate loss function for the AUC score (named the AUC margin loss). It is more robust than the commonly used AUC square loss, while enjoying the same advantages for large-scale stochastic optimization. Second, we conduct empirical studies of our DAM method on difficult medical image classification tasks, namely classification of chest X-ray images for identifying many threatening diseases and classification of skin-lesion images for identifying melanoma. Our DAM method achieved great success on these difficult tasks: first place on the Stanford CheXpert competition (as of the paper submission date) and a top-1% rank (33rd out of 3,314 teams) in the Kaggle 2020 Melanoma classification competition. We also conduct extensive ablation studies to demonstrate the advantages of the new AUC margin loss over the AUC square loss on benchmark datasets. To the best of our knowledge, this is the first work that makes DAM succeed on large-scale medical image datasets.
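
To make the margin idea concrete, here is a minimal PyTorch sketch of a pairwise margin-based AUC surrogate on a mini-batch. This is our illustration, not the paper's method: DAM actually optimizes an equivalent min-max reformulation that avoids forming all positive-negative pairs, and the function name, margin value, and toy data below are assumptions.

```python
import torch

def pairwise_auc_margin_loss(scores, labels, margin=1.0):
    """Naive pairwise margin surrogate for AUC on one mini-batch.

    Illustrative sketch only: the paper optimizes an equivalent min-max
    reformulation that scales to large datasets, not this O(P*N) pairwise
    form. `margin` plays the role of the margin parameter m.
    """
    pos = scores[labels == 1]          # scores of positive examples
    neg = scores[labels == 0]          # scores of negative examples
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.sum() * 0.0      # no positive-negative pairs in batch
    diff = pos.view(-1, 1) - neg.view(1, -1)   # all positive-negative gaps
    # A squared loss would penalize every deviation from m, even for pairs
    # that are already well ranked; the hinge below ignores pairs whose gap
    # already exceeds m, which is what makes the margin version more robust.
    return torch.clamp(margin - diff, min=0).pow(2).mean()

# usage on random data
scores = torch.randn(8, requires_grad=True)
labels = torch.tensor([1, 0, 0, 1, 0, 1, 0, 0])
loss = pairwise_auc_margin_loss(scores, labels)
loss.backward()
```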


General Multi-label Image Classification with Transformers

arXiv.org Artificial Intelligence

Multi-label image classification is the task of predicting a set of labels corresponding to objects, attributes, or other entities present in an image. In this work, we propose the Classification Transformer (C-Tran), a general framework for multi-label image classification that leverages Transformers to exploit the complex dependencies among visual features and labels. Our approach consists of a Transformer encoder trained to predict a set of target labels given an input set of masked labels and visual features from a convolutional neural network. A key ingredient of our method is a label mask training objective that uses a ternary encoding scheme to represent the state of the labels as positive, negative, or unknown during training. Our model shows state-of-the-art performance on challenging datasets such as COCO and Visual Genome. Moreover, because our model explicitly represents the uncertainty of labels during training, it is more general, allowing us to produce improved results for images with partial or extra label annotations during inference. We demonstrate this additional capability on the COCO, Visual Genome, News500, and CUB image datasets.
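
The label mask objective is easiest to see in code. Below is a hypothetical, heavily simplified PyTorch sketch of the ternary state encoding and masking step; all module names, sizes, and the random data are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LabelMaskEncoder(nn.Module):
    """Sketch of label-mask training with a ternary state encoding: each
    label becomes a token = label embedding + state embedding (unknown /
    negative / positive), concatenated with visual feature tokens and fed
    to a Transformer encoder."""
    UNKNOWN, NEG, POS = 0, 1, 2

    def __init__(self, num_labels, dim=256):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, dim)
        self.state_emb = nn.Embedding(3, dim)   # ternary state encoding
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(dim, 1)
        self.num_labels = num_labels

    def forward(self, visual_tokens, states):
        # visual_tokens: (B, T, dim) from a CNN; states: (B, L) in {0,1,2}
        ids = torch.arange(self.num_labels, device=states.device)
        label_tokens = self.label_emb(ids)[None] + self.state_emb(states)
        h = self.encoder(torch.cat([visual_tokens, label_tokens], dim=1))
        return self.head(h[:, -self.num_labels:]).squeeze(-1)  # per-label logits

# label mask training: hide a random subset of the ground truth as "unknown"
B, L, D = 2, 5, 256
model = LabelMaskEncoder(num_labels=L, dim=D)
targets = torch.randint(0, 2, (B, L))              # true 0/1 labels
known = torch.rand(B, L) < 0.5                     # randomly revealed labels
states = torch.where(known, targets + 1, torch.zeros_like(targets))
logits = model(torch.randn(B, 49, D), states)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits.float(), targets.float())
```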


Cancer image classification based on DenseNet model

arXiv.org Machine Learning

Computer-aided diagnosis establishes methods for the robust assessment of medical image-based examinations. Image processing offers a promising strategy to facilitate disease classification and detection while reducing unnecessary expenses. In this paper, we propose a novel metastatic cancer image classification model based on DenseNet blocks, which can effectively identify metastatic cancer in small image patches taken from larger digital pathology scans. We evaluate the proposed approach on a slightly modified version of the PatchCamelyon (PCam) benchmark dataset, provided by a Kaggle competition, which packs the clinically relevant task of metastasis detection into a straightforward binary image classification task. The experiments indicate that our model outperforms classical methods such as ResNet34 and VGG19. Moreover, we also conduct data augmentation experiments and study the relationship between the number of batches processed and the loss value during training and validation.
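
For readers unfamiliar with the building block, here is a minimal PyTorch sketch of a DenseNet block, the dense connectivity pattern such a model reuses; the growth rate, layer count, and patch size below are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal DenseNet block: each layer receives the concatenation of all
    previous feature maps, so features are reused rather than recomputed."""

    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            ch += growth_rate  # input width grows by concatenation

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# e.g. a 96x96 PCam-sized patch with 16 input channels
y = DenseBlock(in_channels=16)(torch.randn(1, 16, 96, 96))
print(y.shape)  # torch.Size([1, 144, 96, 96]) -> 16 + 4 * 32 channels
```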


Building a Real-World Pipeline for Image Classification -- Part I

#artificialintelligence

Images are among the most important elements of a classifieds website. Most likely, they are part of your landing page, where users spend most of their time. UX is one of our cornerstones at heycar, so we strive to deliver the best possible experience for our users. As mentioned in a previous article, heycar is tightly bound to the market in which it operates.


Learning Data Augmentation with Online Bilevel Optimization for Image Classification

arXiv.org Machine Learning

Data augmentation is a key practice in machine learning for improving generalization performance. However, finding the best data augmentation hyperparameters requires domain knowledge or a computationally demanding search. We address this issue by proposing an efficient approach to automatically train a network that learns an effective distribution of transformations to improve its generalization. Using bilevel optimization, we directly optimize the data augmentation parameters using a validation set. This framework can be used as a general solution to learn the optimal data augmentation jointly with an end task model like a classifier. Results show that our joint training method produces an image classification accuracy that is comparable to or better than carefully hand-crafted data augmentation. Yet, it does not need an expensive external validation loop on the data augmentation hyperparameters.
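
As a rough illustration of the bilevel idea, the sketch below learns a single differentiable augmentation parameter (a Gaussian noise scale) by unrolling one inner SGD step on the training loss and updating the augmentation parameter on the validation loss. The toy linear model, the choice of noise as the transformation, and all hyperparameters are our assumptions; the paper learns a distribution over much richer transformations online.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: a linear classifier and one learnable augmentation parameter
# (the log of a Gaussian noise scale applied to the inputs).
model = nn.Linear(10, 2)
log_sigma = torch.zeros(1, requires_grad=True)
opt_aug = torch.optim.Adam([log_sigma], lr=1e-2)
lr_inner = 0.1

x_tr, y_tr = torch.randn(64, 10), torch.randint(0, 2, (64,))
x_va, y_va = torch.randn(64, 10), torch.randint(0, 2, (64,))

for step in range(100):
    # Inner problem: one SGD step on the augmented training loss, kept
    # differentiable w.r.t. log_sigma via create_graph=True.
    x_aug = x_tr + torch.exp(log_sigma) * torch.randn_like(x_tr)
    w, b = model.weight, model.bias
    loss_tr = F.cross_entropy(F.linear(x_aug, w, b), y_tr)
    gw, gb = torch.autograd.grad(loss_tr, (w, b), create_graph=True)
    w1, b1 = w - lr_inner * gw, b - lr_inner * gb

    # Outer problem: update the augmentation parameter so the post-update
    # model does well on clean validation data.
    loss_va = F.cross_entropy(F.linear(x_va, w1, b1), y_va)
    opt_aug.zero_grad()
    loss_va.backward()
    opt_aug.step()

    # Commit the inner update to the model before the next iteration.
    with torch.no_grad():
        model.weight.copy_(w1)
        model.bias.copy_(b1)
```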


MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks

arXiv.org Artificial Intelligence

Deep convolutional networks have been quite successful at various image classification tasks. Current methods for explaining the predictions of a pre-trained model rely on gradient information, often resulting in saliency maps that focus on the foreground object as a whole. However, humans typically reason by dissecting an image and pointing out the presence of smaller concepts; the final output is often an aggregation of the presence or absence of these smaller concepts. In this work, we propose MACE: a Model Agnostic Concept Extractor, which can explain the workings of a convolutional network through smaller concepts. The MACE framework dissects the feature maps generated by a convolutional network for an image to extract concept-based prototypical explanations. Further, it estimates the relevance of the extracted concepts to the pre-trained model's predictions, a critical aspect of explaining individual class predictions that is missing in existing approaches. We validate our framework using the VGG16 and ResNet50 CNN architectures and on datasets such as Animals with Attributes 2 (AWA2) and Places365. Our experiments demonstrate that the concepts extracted by the MACE framework increase the human interpretability of the explanations and are faithful to the underlying pre-trained black-box model.
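
The snippet below is not the MACE algorithm itself, only a rough sketch of the general flavor of deriving concept-like prototypes from feature maps: cluster the spatial descriptors of a pre-trained CNN's feature map for one image. The choice of k-means, the cluster count, and the torchvision >= 0.13 weights API are our assumptions.

```python
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# Illustrative only: use the conv trunk of a pre-trained VGG16 and treat
# every spatial position of its last feature map as a 512-d descriptor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

img = torch.randn(1, 3, 224, 224)            # stand-in for a real image
with torch.no_grad():
    fmap = vgg(img)                          # shape (1, 512, 7, 7)

# Cluster the 49 spatial descriptors; each centre acts as a crude
# "concept" prototype for this image.
descriptors = fmap.squeeze(0).permute(1, 2, 0).reshape(-1, 512).numpy()
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(descriptors)
prototypes = kmeans.cluster_centers_
print(prototypes.shape)                      # (4, 512)
```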


Building an End to End Image Classification/Recognition Application

#artificialintelligence

This article was published as a part of the Data Science Blogathon. In recent years, face recognition applications have been developed on a much larger scale. Image classification and recognition have evolved and are being used in a number of places. I recently read an article about a face recognition application deployed at an airport for a completely automated check-in process. This removes the need for manual intervention and provides a seamless end-to-end check-in experience via technology.


Using TensorFlow and Azure to Add Image Classification to Your Android Apps

#artificialintelligence

TensorFlow is a well-established, open-source machine learning and deep learning framework that can be used to create and run a wide range of models, usually on powerful machines in the cloud. In addition, TensorFlow supports running models on mobile devices through the TensorFlow.Mobile library, taking advantage of the hardware acceleration available on modern phones to run models remarkably fast on low-powered devices. In this post, we'll discuss how to build an image classifier that identifies different fruits using the Azure Custom Vision service, and create a simple Android app that uses this model to classify images. Creating ML models can be time-consuming and can require large data sets. To make it easier to create image classification models, Microsoft has created the Custom Vision Service, which uses a technique called transfer learning to let you train an image classifier using only a small number of images, instead of the thousands that would traditionally be required.
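
As a rough sketch of the classification step, the Python snippet below runs an exported model against a photo. The file names, the assumption of a TensorFlow Lite export, and the missing input normalization are ours; the real preprocessing depends on how Custom Vision exported the model, and the article itself targets Android rather than Python.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# File names are assumptions: a Custom Vision TensorFlow Lite export
# typically includes a model file and a labels file.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]

# Resize the photo (assumed RGB) to the model's expected input size; the
# exact normalization depends on how the model was exported.
height, width = int(inp["shape"][1]), int(inp["shape"][2])
img = Image.open("fruit.jpg").resize((width, height))
batch = np.asarray(img, dtype=inp["dtype"])[None, ...]

interpreter.set_tensor(inp["index"], batch)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
print(labels[int(np.argmax(probs))])   # most likely fruit label
```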


Teledyne Imaging's New Z-Trak2 Family of 3D Profile Sensors Delivers up to 45K Profiles/Sec

#artificialintelligence

Teledyne Imaging, a Teledyne Technologies company and global leader in machine vision technology, has launched Z-Trak2, its newest family of 3D profile sensors. Built on Teledyne Imaging's 3D image sensor technology, Z-Trak2 ushers in a new era of 5GigE 3D profile sensors for high-speed, in-line 3D applications. Models deliver scan speeds of up to 45,000 profiles per second and feature built-in HDR and reflection compensation algorithms to handle surfaces with varying degrees of reflectivity in a single scan. This enables in-line height measurements for inspection, detection, identification, and guidance in the electronics, semiconductor, automotive, and factory automation market segments. Offering 2,000 points per profile, all Z-Trak2 models are factory calibrated and are offered with either blue or red eye-safe lasers to suit various surface properties and operating environments.