Goto

Collaborating Authors

 White, Christopher


Learning without gradient descent encoded by the dynamics of a neurobiological model

arXiv.org Artificial Intelligence

The success of state-of-the-art machine learning rests almost entirely on variants of gradient descent that minimize some version of a cost or loss function. A fundamental limitation, however, is the need to train these systems, in either a supervised or an unsupervised way, by exposing them to typically large numbers of training examples. Here, we introduce a fundamentally new conceptual approach to machine learning that takes advantage of a neurobiologically derived model of dynamic signaling, constrained by the geometric structure of a network. We show that MNIST images can be uniquely encoded and classified by the dynamics of geometric networks with nearly state-of-the-art accuracy, in an unsupervised way and without the need for any training.
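The abstract's central claim, classification driven by the dynamics of a fixed network rather than by gradient-trained weights, can be illustrated with a generic reservoir-style sketch. Everything below (the random recurrent map, the synthetic three-class data, the leave-one-out 1-NN evaluation) is an illustrative assumption, not the paper's neurobiological model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative reservoir-style sketch (NOT the paper's neurobiological model):
# a fixed, untrained random recurrent network whose trajectory "fingerprints"
# each input, so encoding involves no gradient descent at all.
d_in, d_h, T = 20, 64, 10
W = rng.normal(0, 1 / np.sqrt(d_h), (d_h, d_h))  # fixed random recurrence
U = rng.normal(0, 1.0, (d_h, d_in))              # fixed input projection

def encode(x):
    h = np.zeros(d_h)
    for _ in range(T):             # run the dynamics; no learning anywhere
        h = np.tanh(W @ h + U @ x)
    return h

# Synthetic three-class inputs: noisy copies of random prototypes.
protos = rng.normal(0, 1, (3, d_in))
X = np.vstack([p + 0.2 * rng.normal(size=(30, d_in)) for p in protos])
y = np.repeat(np.arange(3), 30)
codes = np.array([encode(x) for x in X])

# Leave-one-out 1-NN in code space; labels are used only for evaluation.
D = np.linalg.norm(codes[:, None] - codes[None], axis=-1)
np.fill_diagonal(D, np.inf)
acc = (y[D.argmin(axis=1)] == y).mean()
```

The labels enter only to evaluate whether the encodings are separable; no parameter is ever adjusted, which is the sense in which such a scheme is training-free.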


Inducing a hierarchy for multi-class classification problems

arXiv.org Machine Learning

In applications where categorical labels follow a natural hierarchy, classification methods that exploit the label structure often outperform those that do not. Unfortunately, the majority of classification datasets do not come pre-equipped with a hierarchical structure, and classical "flat" classifiers must be employed. In this paper, we investigate a class of methods that induce a hierarchy and can thereby similarly improve classification performance over flat classifiers. These methods first cluster the conditional distributions and then fit a hierarchical classifier using the induced hierarchy. We demonstrate the effectiveness of this class of methods both for discovering a latent hierarchy and for improving accuracy, in principled simulation settings and in three real data applications.

Machine learning practitioners are often challenged with the task of classifying an object as one of tens or hundreds of classes. To address these problems, algorithms originally designed for binary or small multi-class problems are naively applied. In some instances the large set of labels comes pre-equipped with a hierarchical structure: that is, some labels are known to be mutually semantically similar to varying degrees.
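The two-step recipe described above (cluster the conditional distributions, then classify with the induced hierarchy) can be sketched on toy data. The 2-means clustering of class means and the nearest-mean classifiers below are illustrative stand-ins, not the paper's specific estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 4 classes whose means fall into two natural groups,
# so a two-level hierarchy (2 super-classes, 2 classes each) is recoverable.
true_means = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in true_means])
y = np.repeat(np.arange(4), 50)

# Step 1: summarize each conditional distribution by its sample mean.
class_means = np.array([X[y == k].mean(axis=0) for k in range(4)])

# Step 2: induce the hierarchy by clustering the class means (tiny 2-means).
def two_means(P, iters=10):
    C = P[[0, -1]].copy()  # initialize centers at two distinct class means
    for _ in range(iters):
        lab = np.argmin(((P[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([P[lab == j].mean(axis=0) for j in range(2)])
    return lab, C

super_of_class, centers = two_means(class_means)

# Step 3: hierarchical classification: pick the nearest super-class center,
# then the nearest class mean within that super-class.
def classify(x):
    s = np.argmin(((centers - x) ** 2).sum(-1))
    members = np.where(super_of_class == s)[0]
    return members[np.argmin(((class_means[members] - x) ** 2).sum(-1))]

acc = np.mean([classify(x) == k for x, k in zip(X, y)])
```

The point of the hierarchy is that the top-level decision is easy (the super-class clouds are far apart), so errors are confined to the harder within-group distinctions.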


Vertex Nomination, Consistent Estimation, and Adversarial Modification

arXiv.org Machine Learning

Given a pair of graphs $G_1$ and $G_2$ and a vertex set of interest in $G_1$, the vertex nomination (VN) problem seeks to find the corresponding vertices of interest in $G_2$ (if they exist) and to produce a rank list of the vertices in $G_2$, with the corresponding vertices of interest ideally concentrating at the top of the list. In this paper we study the effect of an adversarial contamination model on the performance of a spectral graph embedding-based VN scheme. In both real and simulated examples, we demonstrate that this VN scheme performs effectively in the uncontaminated setting; that adversarial network contamination adversely impacts its performance; and that network regularization successfully mitigates the impact of the contamination. In addition to furthering the theoretical basis of consistency in vertex nomination, the adversarial noise model posited herein is grounded in theoretical developments that allow us to frame the role of an adversary in terms of maximal VN consistency classes.
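A minimal sketch of spectral embedding-based vertex nomination in the uncontaminated setting: embed both graphs by adjacency spectral embedding, align the embeddings with orthogonal Procrustes, and rank $G_2$'s vertices by distance to the embedded vertex of interest. The stochastic block model pair and all parameters below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: two independent stochastic block model draws over the
# same 100 vertices (50 per block) stand in for G1 and G2 with a shared,
# known vertex correspondence.
n, block = 100, np.repeat([0, 1], 50)
B = np.array([[0.5, 0.1], [0.1, 0.5]])

def sample_sbm():
    P = B[block][:, block]                      # pairwise edge probabilities
    upper = np.triu(rng.random((n, n)), 1) < np.triu(P, 1)
    A = upper.astype(float)
    return A + A.T                              # symmetric, no self-loops

def ase(A, d=2):
    """Adjacency spectral embedding: scaled top-|eigenvalue| eigenvectors."""
    w, V = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(w))[:d]
    return V[:, idx] * np.sqrt(np.abs(w[idx]))

X1, X2 = ase(sample_sbm()), ase(sample_sbm())

# Align X2 to X1 with orthogonal Procrustes, using all vertices as seeds.
U, _, Vt = np.linalg.svd(X2.T @ X1)
X2 = X2 @ (U @ Vt)

# Nominate: rank G2's vertices by distance to the embedded vertex of interest.
v_star = 0                                      # vertex of interest (block 0)
dists = np.linalg.norm(X2 - X1[v_star], axis=1)
rank_list = np.argsort(dists)                   # G2 candidates, best first
```

Since vertices in the same block are statistically exchangeable here, success means the top of the rank list is dominated by block-0 vertices; adversarial contamination would perturb the embedding and push them down the list.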