Sanjoy Dasgupta
An adaptive nearest neighbor rule for classification
Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran
We introduce a variant of the k-nearest neighbor classifier in which k is chosen adaptively for each query, rather than being supplied as a parameter. The choice of k depends on properties of each neighborhood, and therefore may significantly vary between different points. For example, the algorithm will use larger k for predicting the labels of points in noisy regions. We provide theory and experiments that demonstrate that the algorithm performs comparably to, and sometimes better than, k-NN with an optimal choice of k. In particular, we bound the convergence rate of our classifier in terms of a local quantity we call the "advantage", giving results that are both more general and more accurate than the smoothness-based bounds of earlier nearest neighbor work. Our analysis uses a variant of the uniform convergence theorem of Vapnik-Chervonenkis that is for empirical estimates of conditional probabilities and may be of independent interest.
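To make the per-query choice of k concrete, the following is a minimal sketch of one way such a rule can look: k is grown for each query until the neighborhood majority clears a Hoeffding-style confidence margin. The stopping rule, the delta parameter, and the fallback vote are illustrative assumptions of this sketch and do not reproduce the paper's advantage-based rule or analysis.

```python
# A minimal sketch (binary labels) of a nearest neighbor rule whose k is chosen
# per query. The stopping rule -- grow k until the neighborhood majority clears
# a Hoeffding-style confidence radius -- is an illustrative assumption, not the
# paper's "advantage"-based rule.
import numpy as np

def adaptive_knn_predict(X_train, y_train, x_query, delta=0.05, k_max=None):
    """Return (predicted label in {0, 1}, the k that was used)."""
    n = len(X_train)
    k_max = k_max or n
    # Order training points by distance to the query.
    order = np.argsort(np.linalg.norm(X_train - x_query, axis=1))
    labels = y_train[order]
    for k in range(1, k_max + 1):
        frac_ones = labels[:k].mean()           # fraction of label-1 neighbors
        margin = abs(frac_ones - 0.5)
        # Confidence radius shrinking like 1/sqrt(k) (assumed form).
        radius = np.sqrt(np.log(2.0 * n / delta) / (2.0 * k))
        if margin > radius:                     # majority is statistically clear
            return int(frac_ones > 0.5), k
    # No k produced a confident majority; fall back to a plain k_max-NN vote.
    return int(labels[:k_max].mean() > 0.5), k_max
```

In noisy regions the majority stays close to 1/2 for small k, so the loop runs longer and returns a larger k, matching the behavior described in the abstract.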
Learning from discriminative feature feedback
Sanjoy Dasgupta, Akansha Dey, Nicholas Roberts, Sivan Sabato
We consider the problem of learning a multi-class classifier from labels as well as simple explanations that we call discriminative features. We show that such explanations can be provided whenever the target concept is a decision tree, or can be expressed as a particular type of multi-class DNF formula. We present an efficient online algorithm for learning from such feedback and we give tight bounds on the number of mistakes made during the learning process. These bounds depend only on the representation size of the target concept and not on the overall number of available features, which could be infinite. We also demonstrate the learning procedure experimentally.
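As a rough illustration of the feedback model only, one can picture an online loop in which a wrong prediction is answered with the correct label plus a single feature of the current instance that separates it from the evidence behind the wrong prediction. The rule-list learner below is an assumed stand-in for how such feedback might be consumed; it is not the paper's mistake-bounded algorithm.

```python
# An illustrative sketch of the interaction loop for discriminative feature
# feedback. Instances are sets of present features; `teacher(x, pred)` returns
# None when the prediction is correct, otherwise (true_label, feature) where
# `feature` is a discriminative feature for x. The simple rule-list learner is
# an assumption for illustration, not the paper's mistake-bounded algorithm.

def run_feedback_protocol(stream, teacher, default_label):
    rules = []                                   # (feature, label) pairs, newest first
    mistakes = 0
    for x in stream:
        # Predict with the first rule whose feature is present in x, else default.
        pred = next((label for feat, label in rules if feat in x), default_label)
        feedback = teacher(x, pred)
        if feedback is not None:                 # mistake: record the explanation
            true_label, feature = feedback
            rules.insert(0, (feature, true_label))
            mistakes += 1
    return rules, mistakes
```

Note that the number of stored rules grows only with the number of mistakes, which is in the spirit of the abstract's claim that the bounds depend on the target's representation size rather than on the total number of available features.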
Interactive Structure Learning with Structural Query-by-Committee
Christopher Tosh, Sanjoy Dasgupta
In this work, we introduce interactive structure learning, a framework that unifies many different interactive learning tasks. We present a generalization of the query-by-committee active learning algorithm for this setting, and we study its consistency and rate of convergence, both theoretically and empirically, with and without noise.
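For context, here is a minimal sketch of the classical query-by-committee selection step that the paper generalizes. Fitting committee members on bootstrap resamples (using scikit-learn's LogisticRegression) and scoring pool points by vote entropy are assumptions of this sketch, not the structural variant studied in the paper.

```python
# A minimal sketch of the classical query-by-committee selection step, for
# context only; the paper's structural generalization is not reproduced here.
# Bootstrap-resampled LogisticRegression members and a vote-entropy score are
# assumptions of this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

def qbc_select(X_labeled, y_labeled, X_pool, n_committee=5, seed=0):
    """Return the index of the pool point on which the committee disagrees most."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_committee):
        # Resample the labeled set until it contains more than one class.
        while True:
            idx = rng.integers(0, len(X_labeled), size=len(X_labeled))
            if len(np.unique(y_labeled[idx])) > 1:
                break
        member = LogisticRegression().fit(X_labeled[idx], y_labeled[idx])
        votes.append(member.predict(X_pool))
    votes = np.array(votes)                       # shape: (n_committee, n_pool)
    # Vote entropy per pool point as the disagreement measure.
    scores = []
    for column in votes.T:
        _, counts = np.unique(column, return_counts=True)
        p = counts / counts.sum()
        scores.append(-(p * np.log(p)).sum())
    return int(np.argmax(scores))
```

The selected index would then be sent for labeling and the committee refit, repeating until the labeling budget is exhausted.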