Benjamin Moseley
Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search
Benjamin Moseley, Joshua Wang
Hierarchical clustering is a data analysis method that has been used for decades. Despite its widespread use, the method has an underdeveloped analytical foundation. Having a well-understood foundation would both support the currently used methods and help guide future improvements. The goal of this paper is to give an analytic framework to better understand observations seen in practice. This paper considers the dual of a problem framework for hierarchical clustering introduced by Dasgupta [Das16].
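As a concrete illustration of one of the algorithms named in the title, the following is a minimal sketch of average-linkage agglomerative clustering: repeatedly merge the two clusters whose cross-cluster average similarity is highest. The similarity matrix and the bookkeeping below are illustrative placeholders, not taken from the paper.

# Minimal sketch of average-linkage agglomerative clustering (illustrative only).
import numpy as np

def average_linkage(sim):
    """Greedily merge the pair of clusters with the highest average similarity.

    sim: symmetric (n, n) array of pairwise similarities.
    Returns the sequence of merges as (cluster_a, cluster_b) lists of point indices.
    """
    clusters = [[i] for i in range(len(sim))]
    merges = []
    while len(clusters) > 1:
        best, best_pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average similarity over all cross-cluster pairs.
                avg = np.mean([sim[i, j] for i in clusters[a] for j in clusters[b]])
                if avg > best:
                    best, best_pair = avg, (a, b)
        a, b = best_pair
        merges.append((clusters[a], clusters[b]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
print(average_linkage(sim))  # merges the two most similar points first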
Efficient nonmyopic batch active search
Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley, Roman Garnett
Active search is a learning paradigm for actively identifying as many members of a given class as possible. A critical target scenario is high-throughput screening for scientific discovery, such as drug or materials discovery. In these settings, specialized instruments can often evaluate multiple points simultaneously; however, all existing work on active search focuses on sequential acquisition.
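To make the batch setting concrete, here is a toy sketch (not the paper's nonmyopic policy): with a batch size b, an instrument evaluates the b highest-scoring unlabeled points in one round. The scores, labeled set, and batch size are made-up placeholders.

# Toy batch selection for active search (illustrative only).
import numpy as np

def select_batch(scores, labeled, b):
    """Pick the b highest-scoring indices that are not yet labeled."""
    candidates = [i for i in np.argsort(-scores) if i not in labeled]
    return candidates[:b]

scores = np.array([0.2, 0.9, 0.5, 0.7, 0.1])
print(select_batch(scores, labeled={1}, b=2))  # -> [3, 2]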
Cost Effective Active Search
Shali Jiang, Roman Garnett, Benjamin Moseley
We study a paradigm of active learning we call cost effective active search, where the goal is to find a given number of positive points from a large unlabeled pool with minimum labeling cost. Most existing methods solve this problem heuristically, and few theoretical results have been established. Here we adopt a principled Bayesian approach for the first time.
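To illustrate the problem setting, the sketch below implements a naive greedy baseline, not the Bayesian policy developed in the paper: repeatedly query the unlabeled point with the highest estimated probability of being positive until the target number of positives is found, counting queries as labeling cost. The probabilities and labels are synthetic.

# Greedy baseline for the cost-effective active search setting (illustrative only).
import numpy as np

def greedy_search(prob, labels, target):
    """prob: estimated P(positive) per point; labels: hidden 0/1 labels;
    target: number of positives to find. Returns the number of queries spent."""
    order = np.argsort(-prob)          # query the most promising points first
    found, cost = 0, 0
    for i in order:
        cost += 1
        found += labels[i]
        if found >= target:
            break
    return cost

rng = np.random.default_rng(1)
prob = rng.random(100)
labels = (rng.random(100) < prob).astype(int)   # synthetic unlabeled pool
print(greedy_search(prob, labels, target=5))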
Backprop with Approximate Activations for Memory-efficient Network Training
Ayan Chakrabarti, Benjamin Moseley
Training convolutional neural network models is memory-intensive, since backpropagation requires storing the activations of all intermediate layers. This presents a practical concern when seeking to deploy very deep architectures in production, especially when models need to be frequently re-trained on updated datasets. In this paper, we propose a new implementation of backpropagation that significantly reduces memory usage by enabling the use of approximations with negligible computational cost and minimal effect on training performance. The algorithm reuses common buffers to temporarily store full activations and compute the forward pass exactly. It also stores approximate per-layer copies of the activations, used in the backward pass, at a significant saving in memory. Compared to simply approximating activations within standard backpropagation, our method limits the accumulation of errors across layers. This allows the use of much lower-precision approximations without affecting training accuracy.
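A minimal sketch of the general idea, assuming a single linear-plus-ReLU layer in NumPy: the forward pass is computed exactly, only an int8-quantized copy of the input activation is retained, and the backward pass dequantizes that copy when forming the weight gradient. This is illustrative only and does not reproduce the paper's buffer-reuse implementation.

# Illustrative sketch: backward pass that uses a low-precision copy of the activation.
import numpy as np

def quantize(x, bits=8):
    """Uniformly quantize x to the given bit width; returns int codes and a scale."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-12
    codes = np.round(x / scale).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    return codes.astype(np.float32) * scale

class LinearReLU:
    def __init__(self, d_in, d_out, rng):
        self.W = rng.standard_normal((d_in, d_out)).astype(np.float32) * 0.1

    def forward(self, x):
        # Exact forward pass; only a low-precision copy of x is kept for backward.
        self.x_codes, self.x_scale = quantize(x)
        z = x @ self.W
        self.mask = z > 0            # ReLU mask (cheap to store as booleans)
        return np.maximum(z, 0)

    def backward(self, grad_out):
        grad_z = grad_out * self.mask
        x_approx = dequantize(self.x_codes, self.x_scale)  # approximate activation
        self.grad_W = x_approx.T @ grad_z                  # weight gradient from the copy
        return grad_z @ self.W.T

rng = np.random.default_rng(0)
layer = LinearReLU(16, 8, rng)
x = rng.standard_normal((4, 16)).astype(np.float32)
y = layer.forward(x)
_ = layer.backward(np.ones_like(y))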