Sensing and Signal Processing


Has AI Gone Too Far? - Automated Inference of Criminality Using Face Images

@machinelearnbot

Summary: This new study claims to be able to identify criminals based on their facial characteristics. Even if the data science is good, has AI pushed too far into areas of societal taboo? This isn't the first time data science has been restricted in favor of social goals, but this study may be a trip wire that starts a long and difficult discussion about the role of AI. Has AI gone too far? This might seem like a nonsensical question to data scientists who strive every day to expand the capabilities of AI, until you read the headlines created by this just-released, peer-reviewed scientific paper: Automated Inference on Criminality Using Face Images (Xiaolin Wu, McMaster Univ.).


junyanz/pytorch-CycleGAN-and-pix2pix

#artificialintelligence

To train a model on your own datasets, you need to create a data folder with two subdirectories, trainA and trainB, that contain images from domains A and B. The two domains should be visually distinct; for example, landscape painting <-> landscape photographs works much better than portrait painting <-> landscape photographs. For the paired (pix2pix) layout, A and B should each have their own subfolders train, val, test, etc. In /path/to/data/A/train, put training images in style A.
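
A minimal Python sketch of setting up that layout, assuming a hypothetical dataset root /path/to/data (both the unpaired CycleGAN folders and the paired pix2pix folders are shown):

    import os

    root = "/path/to/data"  # hypothetical root; use your own path

    # Unpaired (CycleGAN) layout: images split by domain.
    for sub in ("trainA", "trainB", "testA", "testB"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)

    # Paired (pix2pix) layout: A/ and B/ each with their own splits.
    for domain in ("A", "B"):
        for split in ("train", "val", "test"):
            os.makedirs(os.path.join(root, domain, split), exist_ok=True)

After creating the folders, drop the corresponding images into each split, e.g. style-A training images into /path/to/data/A/train.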


Conditional CycleGAN for Attribute Guided Face Image Generation

arXiv.org Machine Learning

State-of-the-art techniques in Generative Adversarial Networks (GANs) such as CycleGAN are able to learn the mapping from one image domain $X$ to another image domain $Y$ using unpaired image data. We extend CycleGAN to ${\it Conditional}$ CycleGAN such that the mapping from $X$ to $Y$ is subjected to an attribute condition $Z$. Using face image generation as an application example, where $X$ is a low-resolution face image, $Y$ is a high-resolution face image, and $Z$ is a set of attributes related to facial appearance (e.g. gender, hair color, smile), we present our method to incorporate $Z$ into the network, such that the hallucinated high-resolution face image $Y'$ not only satisfies the low-resolution constraint inherent in $X$, but also the attribute condition prescribed by $Z$. Using a face feature vector extracted from a face verification network as $Z$, we demonstrate the efficacy of our approach on identity-preserving face image super-resolution. Our approach is general and applicable to high-quality face image generation where specific facial attributes can be controlled easily in the automatically generated results.
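
A minimal PyTorch sketch of the conditioning mechanism described above, not the authors' actual architecture: the attribute vector $Z$ is tiled spatially and concatenated with the upsampled low-resolution input before the generator produces the high-resolution output. All layer sizes and names here are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionalGenerator(nn.Module):
        def __init__(self, attr_dim=16, base=64):
            super().__init__()
            # 3 image channels plus attr_dim conditioning channels in.
            self.net = nn.Sequential(
                nn.Conv2d(3 + attr_dim, base, 3, padding=1), nn.ReLU(),
                nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
                nn.Conv2d(base, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, x_lowres, z_attr):
            # Upsample the low-res face, then tile Z over every spatial position.
            x = F.interpolate(x_lowres, scale_factor=4,
                              mode="bilinear", align_corners=False)
            b, _, h, w = x.shape
            z = z_attr.view(b, -1, 1, 1).expand(b, z_attr.size(1), h, w)
            return self.net(torch.cat([x, z], dim=1))

The same concatenation trick lets $Z$ be either a small attribute vector (gender, hair color, smile) or a longer embedding from a face verification network.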


Real Time Image Saliency for Black Box Classifiers

arXiv.org Machine Learning

In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, making it suitable for use in real-time systems. We test our approach on the CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task, achieving results that outperform other weakly supervised methods.
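
A rough sketch of the kind of objective such a masking model can be trained with, simplified from the setup described above (the weights and the background-removal operator are illustrative assumptions; the paper's exact loss differs):

    import torch
    import torch.nn.functional as F

    def saliency_loss(classifier, image, mask, target, area_w=1.0, tv_w=0.1):
        preserved = image * mask          # keep the salient region
        destroyed = image * (1 - mask)    # remove it (here: zero it out)
        # The mask should preserve the class score on the kept region,
        # destroy it when that region is removed, and stay small and smooth.
        keep_score = F.log_softmax(classifier(preserved), dim=1)[:, target]
        drop_score = F.softmax(classifier(destroyed), dim=1)[:, target]
        area = mask.mean()
        tv = (mask[:, :, 1:, :] - mask[:, :, :-1, :]).abs().mean() + \
             (mask[:, :, :, 1:] - mask[:, :, :, :-1]).abs().mean()
        return -keep_score.mean() + drop_score.mean() + area_w * area + tv_w * tv

Because the masking model is a separate network trained against a fixed classifier, producing a saliency map at test time is just one forward pass of the masker.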


How NoSQL Fundamentally Changed Machine Learning

#artificialintelligence

I would like to add to the post. Image processing is a field that has existed on its own longer than machine learning (it predates machine learning by decades); it has been taught mainly as a branch of engineering (electrical and electronics), and to a lesser degree in computer science and physics courses. It is only in the last decade or so that image processing courses have included machine learning topics for image recognition and understanding. The latest (3rd) edition has an added chapter on "Object Recognition" that wasn't available in the 1st and 2nd editions. The last time I passed through my local university bookstore (about a year ago), this textbook was still stocked, because it is still a prescribed textbook for final-year electrical engineering courses.


Real-Time Adaptive Image Compression

arXiv.org Machine Learning

We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.
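
A toy sketch of two of the ingredients named in the abstract, an autoencoder plus a codelength-style regularizer (the pyramidal analysis, adaptive coding module, and adversarial training are omitted; all sizes are illustrative assumptions):

    import torch
    import torch.nn as nn

    class TinyCompressor(nn.Module):
        def __init__(self, n_codes=64):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(3, n_codes, 4, stride=2, padding=1), nn.Sigmoid())
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(n_codes, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x):
            soft = self.enc(x)                     # soft bits in [0, 1]
            hard = (soft > 0.5).float()            # quantized bitmask
            code = soft + (hard - soft).detach()   # straight-through estimator
            return self.dec(code), soft

    def rd_loss(model, x, rate_weight=0.01):
        recon, soft = model(x)
        distortion = (recon - x).pow(2).mean()
        eps = 1e-6  # binary entropy of the soft bits as a codelength proxy
        rate = -(soft * (soft + eps).log2()
                 + (1 - soft) * (1 - soft + eps).log2()).mean()
        return distortion + rate_weight * rate

Varying rate_weight moves the model along the rate-distortion curve, which is how a codec of this kind covers multiple quality levels.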


Locally linear representation for image clustering

arXiv.org Machine Learning

Constructing a similarity graph is a key step in graph-oriented subspace learning and clustering. In a similarity graph, each vertex denotes a data point and each edge weight represents the similarity between two points. There are two popular schemes for constructing a similarity graph: the pairwise-distance-based scheme and the linear-representation-based scheme. Most existing works involve only one of the two schemes and suffer from its limitations. Specifically, pairwise-distance-based methods are sensitive to noise and outliers compared with linear-representation-based methods. On the other hand, linear-representation-based algorithms may wrongly select inter-subspace points to represent a point, which degrades performance. In this paper, we propose an algorithm, called Locally Linear Representation (LLR), which integrates pairwise distance with linear representation to address these problems. The proposed algorithm automatically encodes each data point over a set of points that not only represent the objective point with small residual error, but are also close to the point in Euclidean space. The experimental results show that our approach is promising in subspace learning and subspace clustering.
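
A simplified sketch of the locality-plus-representation idea, essentially LLE-style coding restricted to each point's nearest neighbours (the paper's actual objective and solver differ):

    import numpy as np

    def local_linear_codes(X, k=10):
        # X: (n, d) data matrix. For each point, solve a least-squares
        # reconstruction over its k nearest neighbours only, so codes are
        # both low-residual and local in Euclidean space.
        n = X.shape[0]
        W = np.zeros((n, n))
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise distances
        for i in range(n):
            nbrs = np.argsort(d2[i])[1:k + 1]   # skip the point itself
            A = X[nbrs].T                        # (d, k) dictionary of neighbours
            w, *_ = np.linalg.lstsq(A, X[i], rcond=None)
            W[i, nbrs] = w
        # Graph weights; symmetrize, e.g. (|W| + |W.T|) / 2, before clustering.
        return W

Feeding the resulting graph to spectral clustering gives a basic subspace-clustering pipeline.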


How to Achieve #DigitalTransformation @CloudExpo @DellEMC #DX #AI #IoT

#artificialintelligence

Industry after industry is under siege as companies embrace digital transformation (DX) to disrupt existing business models and disintermediate their competitors' customer relationships. But what do we mean by "Digital Transformation"? Digital Transformation: the coupling of granular, real-time data (e.g., smartphones, connected devices, smart appliances, wearables, mobile commerce, video surveillance) with modern technologies (e.g., cloud-native apps, Big Data architectures, hyper-converged technologies, artificial intelligence, blockchain) to enhance products, processes, and business decision-making with customer, product, and operational insights. Digital transformation starts by understanding the organization's business initiatives, and then prioritizing which initiatives are top candidates for enhancement through digital transformation. "Begin with the end in mind," to quote Stephen Covey.


The Lov\'asz Hinge: A Novel Convex Surrogate for Submodular Losses

arXiv.org Machine Learning

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel surrogate loss function for submodular losses, the Lov\'asz hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. We prove that the Lov\'asz hinge is convex and yields an extension. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through several set prediction tasks, including on the PASCAL VOC and Microsoft COCO datasets.
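
The key building block is the Lov\'asz extension of a set function; here is a small NumPy sketch of computing it on a score vector (the hinge itself adds the thresholding and margin handling the paper makes precise):

    import numpy as np

    def lovasz_extension(loss_fn, s):
        # loss_fn maps a boolean mask over p items (a set of prediction
        # errors) to a real value; s is a vector of per-item scores in R^p.
        p = len(s)
        order = np.argsort(-s)             # visit components in decreasing order
        mask = np.zeros(p, dtype=bool)
        prev, value = loss_fn(mask), 0.0
        for i in order:
            mask[i] = True
            cur = loss_fn(mask)
            value += s[i] * (cur - prev)   # weight each marginal gain by the score
            prev = cur
        return value

For the modular Hamming loss, loss_fn = lambda m: m.sum(), this reduces to sum(s), i.e. the usual additive hinge; genuinely submodular losses yield the tighter Lov\'asz hinge surrogate.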


beamandrew/medical-data

#artificialintelligence

This is a curated list of medical data for machine learning. The list is provided for informational purposes only; please make sure you respect any and all usage restrictions for the data listed here. The National Library of Medicine presents the MedPix database of 53,000 medical images from 13,000 patients, with annotations. Another collection comprises 1112 datasets of structural and resting-state functional MRI data along with an extensive array of phenotypic information, plus clinical, genomic, and biomarker data. AMRG Cardiac Atlas: the AMRG Cardiac MRI Atlas is a complete labelled MRI image set of a normal patient's heart, acquired with the Auckland MRI Research Group's Siemens Avanto scanner.