Image Classification in Machine Learning [Intro + Tutorial]

#artificialintelligence

Image Classification is one of the most fundamental tasks in computer vision. It has revolutionized and propelled technological advancements in the most prominent fields, including the automobile industry, healthcare, manufacturing, and more. How does Image Classification work, and what are its benefits and limitations? Keep reading, and in the next few minutes you'll learn the essentials. Image Classification (often referred to as Image Recognition) is the task of associating one label (single-label classification) or more labels (multi-label classification) with a given image. Here's what it looks like in practice when classifying different birds: the images are tagged using V7. Image Classification is a solid benchmark task for modern architectures and methodologies in computer vision. Now let's briefly discuss the two types of Image Classification, which differ in the complexity of the classification task at hand. Single-label classification is the most common task in supervised Image Classification.
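
To make the single-label vs. multi-label distinction concrete, here is a minimal Keras sketch (the backbone, layer sizes, and class count are illustrative assumptions, not details from the article). The only difference between the two setups is the output activation and the loss: softmax forces classes to compete, while independent sigmoids let any subset of labels be active.

```python
import tensorflow as tf

NUM_CLASSES = 10  # illustrative assumption

# A tiny shared feature extractor standing in for a real CNN backbone.
backbone = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
])

# Single-label: softmax output, exactly one class per image.
single_label = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
single_label.compile(optimizer="adam", loss="categorical_crossentropy")

# Multi-label: independent sigmoid outputs, any number of labels per image.
multi_label = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),
])
multi_label.compile(optimizer="adam", loss="binary_crossentropy")
```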


Are You Sure That You Can Implement Image Classification Networks?

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company.


Satellite Image Classification With Deep Learning

#artificialintelligence

In this tutorial, you'll see how to build a satellite image classifier using Python and TensorFlow. Satellite image classification is an important task in agriculture, crop and forest monitoring, and even urban planning. We're going to use the EuroSAT dataset, which consists of Sentinel-2 satellite images covering…
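
As a starting point, here is a minimal sketch of the kind of pipeline the tutorial describes. It assumes the EuroSAT RGB split is available through TensorFlow Datasets (the "eurosat/rgb" dataset name and the split arithmetic are assumptions based on the public TFDS catalog, not details from the excerpt):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# EuroSAT via TFDS: 64x64 Sentinel-2 patches with land-use labels (assumed config).
(train_ds, test_ds), info = tfds.load(
    "eurosat/rgb", split=["train[:80%]", "train[80%:]"],
    as_supervised=True, with_info=True,
)

def preprocess(image, label):
    # Scale pixel values to [0, 1].
    return tf.cast(image, tf.float32) / 255.0, label

train_ds = train_ds.map(preprocess).shuffle(1024).batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = test_ds.map(preprocess).batch(32)

# A small CNN classifier; architecture is illustrative, not from the tutorial.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(info.features["label"].num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=5)
```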


Neural Style Transfer Tutorial

#artificialintelligence

Neural Style Transfer is a technique that applies the style of one image to the content of another. It's a generative algorithm, meaning that it produces an image as its output. How does it work? In this post, we'll explain how the vanilla Neural Style Transfer algorithm adds different styles to an image and what makes the algorithm unique and interesting. Style Transfer and traditional GANs share the ability to generate images as output.
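
The core of the vanilla algorithm (Gatys et al.) is to optimize an image so that its deep features match the content image while its feature correlations, captured by Gram matrices, match the style image. Here is a minimal sketch of the two losses, assuming features have already been extracted from a CNN such as VGG (the shapes below are illustrative stand-ins):

```python
import tensorflow as tf

def gram_matrix(features):
    # features: (height, width, channels) activations from one CNN layer.
    h, w, c = features.shape
    flat = tf.reshape(features, (h * w, c))
    # Channel-by-channel correlations, normalized by the number of positions.
    return tf.matmul(flat, flat, transpose_a=True) / tf.cast(h * w, tf.float32)

def content_loss(generated_feats, content_feats):
    # Match raw activations: preserves the "what" of the content image.
    return tf.reduce_mean(tf.square(generated_feats - content_feats))

def style_loss(generated_feats, style_feats):
    # Match Gram matrices: preserves textures and colors, discards layout.
    return tf.reduce_mean(
        tf.square(gram_matrix(generated_feats) - gram_matrix(style_feats)))

# Random features standing in for VGG activations, just to show the call:
f_gen = tf.random.uniform((32, 32, 64))
f_style = tf.random.uniform((32, 32, 64))
print("style loss:", float(style_loss(f_gen, f_style)))

# The total objective is a weighted sum, minimized w.r.t. the generated
# image's pixels: content_weight * content_loss + style_weight * style_loss.
```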


Two-Stage Architectural Fine-Tuning with Neural Architecture Search using Early-Stopping in Image Classification

arXiv.org Artificial Intelligence

Deep neural networks (NNs) perform well in various tasks (e.g., computer vision) thanks to convolutional neural networks (CNNs). However, the difficulty of gathering quality data in industrial settings hinders the practical use of NNs. To cope with this issue, the concept of transfer learning (TL) has emerged, which fine-tunes NNs trained on large-scale datasets for data-scarce situations. This paper therefore suggests a two-stage architectural fine-tuning method for image classification, inspired by neural architecture search (NAS). One of the main ideas of the proposed method is mutation with base architectures, which reduces the search cost by exploiting the given architectural information. Moreover, an early-stopping criterion is also considered, which directly reduces the NAS cost. Experimental results verify that the proposed method reduces computational and searching costs by up to 28.2% and 22.3%, respectively, compared to existing methods.
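
The abstract does not detail the algorithm, so the sketch below only illustrates the general pattern it names: mutating a base architecture rather than sampling from scratch, and early-stopping candidate evaluation. Every concrete choice here (the width-list encoding, the simulated scoring, the patience value) is an illustrative assumption, not the paper's method.

```python
import random

# Toy search space: an architecture is a list of layer widths (assumed encoding).
BASE_ARCH = [64, 128, 256]

def simulated_validation_score(arch, epoch):
    # Hypothetical stand-in for real training: improves for a while, then plateaus.
    capacity = 1.0 - abs(sum(arch) - 480) / 480.0
    return capacity * min(1.0, (epoch + 1) / 10.0)

def mutate(arch):
    # Perturb one layer of a known-good architecture, reusing the given
    # architectural information to shrink the search space.
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = max(16, child[i] + random.choice([-32, 32]))
    return child

def evaluate_with_early_stopping(arch, budget=20, patience=3):
    # Stop evaluating a candidate once its score stops improving, so
    # unpromising architectures never consume the full training budget.
    best, stale = 0.0, 0
    for epoch in range(budget):
        score = simulated_validation_score(arch, epoch)
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best

def search(rounds=10):
    best_arch, best_score = BASE_ARCH, evaluate_with_early_stopping(BASE_ARCH)
    for _ in range(rounds):
        candidate = mutate(best_arch)
        score = evaluate_with_early_stopping(candidate)
        if score > best_score:
            best_arch, best_score = candidate, score
    return best_arch

print("selected architecture:", search())
```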


AI for painting: Unraveling Neural Style Transfer

#artificialintelligence

In a world where NFTs are being sold for millions, the next profitable business might be to create unique virtual entities, and who is better suited for the job than artificial intelligence? In fact, well before the NFT hype, in October 2018, the first AI-generated portrait was sold for $432,500. Since then, people have used their deep knowledge of advanced algorithms to make astounding pieces of art. Refik Anadol, for instance, is an artist who uses AI to create captivating paintings. Another digital artist, Petros Vrellis, put up an interactive animation of Van Gogh's celebrated artwork "Starry Night" in 2012, which reached over 1.5 million views within three months.


DiriNet: A network to estimate the spatial and spectral degradation functions

arXiv.org Artificial Intelligence

The spatial and spectral degradation functions are critical to hyper- and multispectral image fusion. However, little work has been devoted to estimating these degradation functions. To learn the spectral response function and the point spread function from the image pairs to be fused, we propose a Dirichlet network in which both functions are properly constrained. Specifically, the spectral response function is constrained to be positive, while a Dirichlet distribution along with a total-variation term is imposed on the point spread function. To the best of our knowledge, this is the first time a neural network with Dirichlet regularization has been investigated to estimate the degradation functions. Both image degradation and fusion experiments demonstrate the effectiveness and superiority of the proposed Dirichlet network.
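
The abstract names the constraints but not their implementation. One common way to realize them is sketched below; this parameterization is an assumption, not the paper's code: positivity via softplus for the spectral response, a simplex-valued kernel (non-negative, summing to one, like a Dirichlet sample) for the point spread function, and a total-variation penalty for smoothness.

```python
import tensorflow as tf

class DegradationModel(tf.keras.Model):
    """Toy estimator of a spectral response function (SRF) and a point
    spread function (PSF), with the constraints named in the abstract."""

    def __init__(self, num_bands=100, num_ms_bands=4, psf_size=7):
        super().__init__()
        self.srf_logits = tf.Variable(tf.zeros((num_bands, num_ms_bands)))
        self.psf_logits = tf.Variable(tf.zeros((psf_size, psf_size)))

    @property
    def srf(self):
        # Positivity constraint via softplus (assumed parameterization).
        return tf.nn.softplus(self.srf_logits)

    @property
    def psf(self):
        # Softmax places the PSF on the probability simplex: non-negative
        # entries summing to one, mimicking a Dirichlet-distributed kernel.
        flat = tf.nn.softmax(tf.reshape(self.psf_logits, [-1]))
        return tf.reshape(flat, self.psf_logits.shape)

def total_variation(kernel):
    # TV penalty encourages a smooth point spread function.
    dy = kernel[1:, :] - kernel[:-1, :]
    dx = kernel[:, 1:] - kernel[:, :-1]
    return tf.reduce_sum(tf.abs(dy)) + tf.reduce_sum(tf.abs(dx))

model = DegradationModel()
print("PSF sums to", float(tf.reduce_sum(model.psf)))  # 1.0 by construction
```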


An Analysis on Ensemble Learning optimized Medical Image Classification with Deep Convolutional Neural Networks

arXiv.org Artificial Intelligence

Novel and high-performance medical image classification pipelines heavily utilize ensemble learning strategies. The idea of ensemble learning is to assemble diverse models or multiple predictions and thus boost prediction performance. However, it remains an open question to what extent, and which, ensemble learning strategies are beneficial in deep learning based medical image classification pipelines. In this work, we propose a reproducible medical image classification pipeline for analyzing the performance impact of the following ensemble learning techniques: Augmenting, Stacking, and Bagging. The pipeline consists of state-of-the-art preprocessing and image augmentation methods as well as nine deep convolutional neural network architectures. It was applied to four popular medical imaging datasets of varying complexity. Furthermore, 12 pooling functions for combining multiple predictions were analyzed, ranging from simple statistical functions like unweighted averaging up to more complex learning-based functions like support vector machines. Our results revealed that Stacking achieved the largest performance gain, with up to a 13% F1-score increase. Augmenting showed consistent improvements of up to 4% and is also applicable to single-model pipelines. Cross-validation based Bagging proved to be the most complex ensemble learning method and resulted in an F1-score decrease on all analyzed datasets (up to -10%). Furthermore, we demonstrated that simple statistical pooling functions perform as well as, or often better than, more complex pooling functions. We conclude that integrating the Stacking and Augmenting ensemble learning techniques is a powerful method for any medical image classification pipeline to improve robustness and boost performance.
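
To illustrate what a pooling function does in this setting, here is a minimal sketch contrasting the simplest statistical pooling (unweighted averaging) with a learning-based one. A logistic-regression stacker stands in for the support vector machine mentioned in the abstract, and the shapes and labels are made-up placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Predicted class probabilities from 3 models on 5 samples, 4 classes each.
rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(4), size=(3, 5))  # shape: (models, samples, classes)

# Simple statistical pooling: unweighted averaging across models.
averaged = preds.mean(axis=0)
print("averaged classes:", averaged.argmax(axis=1))

# Learning-based pooling (Stacking): a meta-learner is trained on the
# concatenated model outputs; logistic regression stands in for an SVM here.
stack_features = preds.transpose(1, 0, 2).reshape(5, -1)  # (samples, models*classes)
labels = np.array([0, 1, 2, 3, 0])  # hypothetical ground truth
meta = LogisticRegression(max_iter=1000).fit(stack_features, labels)
print("stacked classes:", meta.predict(stack_features))
```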


Pose Estimation of Specific Rigid Objects

arXiv.org Artificial Intelligence

In this thesis, we address the problem of estimating the 6D pose of rigid objects from a single RGB or RGB-D input image, assuming that 3D models of the objects are available. This problem is of great importance to many application fields such as robotic manipulation, augmented reality, and autonomous driving. First, we propose EPOS, a method for 6D object pose estimation from an RGB image. The key idea is to represent an object by compact surface fragments and to predict, with a neural network, the probability distribution of corresponding fragments at each pixel of the input image. Each pixel is linked with a data-dependent number of fragments, which allows systematic handling of symmetries, and the 6D poses are estimated from the links by a RANSAC-based fitting method. EPOS outperformed all RGB and most RGB-D and depth-only (D) methods on several standard datasets. Second, we present HashMatch, an RGB-D method that slides a window over the input image and searches for a match against templates, which are pre-generated by rendering 3D object models in different orientations. The method applies a cascade of evaluation stages to each window location, which avoids exhaustive matching against all templates. Third, we propose ObjectSynth, an approach for synthesizing photorealistic images of 3D object models for training neural-network-based methods. These images yield substantial improvements compared to the commonly used images of objects rendered on top of random photographs. Fourth, we introduce T-LESS, the first dataset for 6D object pose estimation that includes 3D models and RGB-D images of industry-relevant objects. Fifth, we define BOP, a benchmark that captures the status quo in the field. BOP comprises eleven datasets in a unified format, an evaluation methodology, an online evaluation system, and public challenges held at international workshops organized at the ICCV and ECCV conferences.
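
The final step described for EPOS, fitting a 6D pose to 2D-3D correspondences with a RANSAC-based method, is a standard operation. Below is a minimal sketch using OpenCV's solvePnPRansac; the model points, pose, and camera intrinsics are made-up placeholders, and this shows the generic procedure rather than the thesis's exact implementation:

```python
import numpy as np
import cv2

# Hypothetical points on a 3D object model and a made-up ground-truth pose.
object_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                          [0.0, 0.0, 0.1], [0.1, 0.1, 0.0], [0.1, 0.0, 0.1]])
K = np.array([[600.0, 0.0, 320.0],   # made-up pinhole camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
rvec_true = np.array([[0.1], [0.2], [0.3]])
tvec_true = np.array([[0.0], [0.0], [0.5]])

# Project the model points to get synthetic 2D observations (in EPOS, the
# 2D-3D links come from per-pixel surface-fragment predictions instead).
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

# RANSAC-based PnP: robustly recovers rotation and translation from the
# correspondences while rejecting outliers.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None, reprojectionError=3.0)

R, _ = cv2.Rodrigues(rvec)  # axis-angle to 3x3 rotation matrix
print("recovered rotation:\n", R, "\ntranslation:", tvec.ravel())
```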


ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification

arXiv.org Artificial Intelligence

Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification. However, a key drawback of existing contrastive augmentation methods is that they may modify the image content, yielding undesired alterations of its semantics. This can affect the performance of the model on downstream tasks. Hence, in this paper, we ask whether we can augment image data in contrastive learning such that the task-relevant semantic content of an image is preserved. For this purpose, we propose to leverage saliency-based explanation methods to create content-preserving masked augmentations for contrastive learning. Our explanation-driven supervised contrastive learning (ExCon) methodology serves the dual goals of encouraging nearby image embeddings to have similar content and explanation. To quantify the impact of ExCon, we conduct experiments on the CIFAR-100 and Tiny ImageNet datasets. We demonstrate that ExCon outperforms vanilla supervised contrastive learning in terms of classification accuracy, explanation quality, adversarial robustness, and calibration of the model's probabilistic predictions under distributional shift.
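
The abstract describes building masked augmentations from saliency explanations. A generic sketch of that idea follows; the vanilla gradient saliency, the masking threshold, and the demo model are illustrative assumptions, not ExCon's exact procedure:

```python
import tensorflow as tf

def saliency_masked_augmentation(model, image, label, keep_fraction=0.5):
    """Keep only the most salient pixels so the augmentation preserves
    task-relevant content (a generic sketch of the idea, not ExCon itself)."""
    image = tf.expand_dims(image, 0)  # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(image)
        logits = model(image, training=False)
        class_score = logits[0, label]
    # Vanilla gradient saliency: how strongly each pixel affects the class score.
    grads = tape.gradient(class_score, image)
    saliency = tf.reduce_max(tf.abs(grads[0]), axis=-1)  # (H, W)

    # Keep the top keep_fraction most salient pixels, zero out the rest.
    k = tf.cast(keep_fraction * tf.cast(tf.size(saliency), tf.float32), tf.int32)
    threshold = tf.sort(tf.reshape(saliency, [-1]), direction="DESCENDING")[k - 1]
    mask = tf.cast(saliency >= threshold, tf.float32)[..., tf.newaxis]
    return image[0] * mask

# Demo with a tiny untrained classifier and a random image:
demo_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
augmented = saliency_masked_augmentation(
    demo_model, tf.random.uniform((32, 32, 3)), label=3)
print("augmented image shape:", augmented.shape)
```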