Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach

Neural Information Processing Systems

We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions at both estimating accuracies and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.
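The agreement intuition in (i) can be illustrated with a deliberately simplified sketch: treat the majority vote over unlabeled data as a proxy label and score each classifier by how often it agrees with it. This is an assumption-laden stand-in for the paper's probabilistic-logic method, which it does not reproduce; the function name and data are hypothetical.

```python
import numpy as np

def agreement_accuracy_estimate(predictions):
    """Estimate each classifier's accuracy from agreement with the
    majority vote over unlabeled data (a hypothetical simplification,
    not the paper's probabilistic-logic approach).
    predictions: (n_classifiers, n_instances) integer class labels."""
    predictions = np.asarray(predictions)
    # Majority vote per instance serves as a proxy label.
    majority = np.array([np.bincount(col).argmax() for col in predictions.T])
    # Accuracy estimate: fraction of instances on which each classifier
    # agrees with the majority.
    return (predictions == majority).mean(axis=1)

# Three classifiers on six unlabeled instances (binary labels).
preds = [[1, 0, 1, 1, 0, 0],
         [1, 0, 1, 0, 0, 0],
         [0, 1, 1, 1, 0, 1]]
est = agreement_accuracy_estimate(preds)
```

Here the third classifier disagrees with the others often, so its estimated accuracy is lowest, matching the intuition that frequent disagreement signals error.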


Learning towards Minimum Hyperspherical Energy

Neural Information Processing Systems

Neural networks are a powerful class of nonlinear functions that can be trained end-to-end on various applications. While over-parametrization gives many neural networks the ability to fit complex functions and the representational power to handle challenging tasks, it also leads to highly correlated neurons that can hurt generalization and incur unnecessary computation cost. As a result, how to regularize the network to avoid undesired representation redundancy becomes an important issue. To this end, we draw inspiration from a well-known problem in physics, the Thomson problem, where one seeks a state that distributes N electrons on a unit sphere as evenly as possible with minimum potential energy. In light of this intuition, we reduce the redundancy regularization problem to generic energy minimization, and propose a minimum hyperspherical energy (MHE) objective as a generic regularizer for neural networks. We also propose several novel variants of MHE and provide some theoretical insights. Finally, we apply neural networks with MHE regularization to several challenging tasks. Extensive experiments demonstrate the effectiveness of our intuition, showing superior performance with MHE regularization.
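The core quantity can be sketched directly: project each neuron's weight vector onto the unit hypersphere and sum inverse pairwise distances, so that evenly spread neurons get low energy. This is a minimal sketch of one MHE variant (a Riesz-style energy with a tunable power); the paper defines several variants, and the function name below is an assumption.

```python
import numpy as np

def hyperspherical_energy(W, power=1.0):
    """Hyperspherical energy of neuron weight vectors (a minimal sketch;
    the paper's MHE objective has several variants).
    W: (n_neurons, dim) weight matrix, one neuron per row."""
    # Project each neuron onto the unit hypersphere.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    n = Wn.shape[0]
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(Wn[i] - Wn[j])
            energy += dist ** (-power)  # inverse-distance (Riesz) term
    return energy

# Evenly spread (orthogonal) neurons have lower energy than clustered ones.
spread = hyperspherical_energy(np.eye(3))
clustered = hyperspherical_energy(np.array([[1.0, 0.0, 0.0],
                                            [0.99, 0.10, 0.0],
                                            [0.98, 0.15, 0.05]]))
```

Minimizing this energy as a regularization term pushes neuron directions apart, which is the redundancy-reduction effect the abstract describes.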


The moment that kicked off the AI revolution

New Scientist

Has the technology lived up to its potential? The first time that AlphaGo revealed its full power, it prompted a visceral reaction. Lee Sedol, the world's greatest player of the ancient Chinese board game Go, had grown visibly agitated at the artificial intelligence's prowess. The hushed crowd in downtown Seoul, South Korea, could barely contain its gasps. It was quickly dawning on Lee, and the tens of millions watching at home, that this AI was different from those that had come before. It wasn't just beating Lee; it was doing so with an almost human-like aptitude.





Deep Anomaly Detection Using Geometric Transformations

Izhak Golan, Ran El-Yaniv

Neural Information Processing Systems

We consider the problem of anomaly detection in images, and present a new detection technique. Given a sample of images, all known to belong to a "normal" class (e.g., dogs), we show how to train a deep neural model that can detect out-of-distribution images (i.e., non-dog objects).
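The technique's self-labeling step can be sketched as follows: apply a fixed set of geometric transformations to each normal image and label each result by which transformation produced it, yielding a supervised task from unlabeled data. This sketch assumes only the four 90-degree rotations; the paper uses a larger transformation set (including flips and translations), and the function name is hypothetical.

```python
import numpy as np

def build_transform_dataset(images):
    """Self-labeled dataset for transformation prediction (a minimal
    sketch of the paper's idea, using only 0/90/180/270-degree
    rotations as the geometric transformations)."""
    X, y = [], []
    for img in images:
        for k in range(4):                # four rotations
            X.append(np.rot90(img, k))    # transformed image
            y.append(k)                   # label = which transformation
    return np.stack(X), np.array(y)

# A classifier trained to predict y from X learns features of the normal
# class; at test time, low confidence on all transformed versions of an
# image signals an anomaly.
imgs = [np.arange(16.0).reshape(4, 4)]
X, y = build_transform_dataset(imgs)
```

The anomaly score then comes from how confidently the trained model identifies the applied transformations on a new image, which it can only do well for images resembling the normal class.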