Ameet S. Talwalkar
Variable Importance Using Decision Trees
Jalil Kazemitabar, Arash Amini, Adam Bloniarz, Ameet S. Talwalkar
Decision trees and random forests are well-established models that not only offer good predictive performance, but also provide rich feature importance information. While practitioners often employ variable importance methods that rely on this impurity-based information, these methods remain poorly characterized from a theoretical perspective. We provide novel insights into the performance of these methods by deriving finite sample performance guarantees in a high-dimensional setting under various modeling assumptions. We further demonstrate the effectiveness of these impurity-based methods via an extensive set of simulations.
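As a rough illustration of the impurity-based importance scores the abstract refers to, the sketch below computes mean-decrease-in-impurity (Gini) importances with a random forest via scikit-learn. The data, feature indices, and model settings are synthetic placeholders, and this is a generic example of the method class, not the paper's specific estimator or theoretical analysis.

```python
# Minimal sketch: impurity-based variable importance from a random forest.
# Synthetic data; only two features actually drive the label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                      # 500 samples, 20 features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)       # features 3 and 7 are relevant

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Mean decrease in impurity, averaged over the trees in the forest.
importances = forest.feature_importances_
ranking = np.argsort(importances)[::-1]
print("Top features by impurity-based importance:", ranking[:5])
```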
Adaptive Gradient-Based Meta-Learning Methods
Mikhail Khodak, Maria-Florina F. Balcan, Ameet S. Talwalkar
We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their meta-test-time performance on standard problems in few-shot learning and federated learning.
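To make the gradient-based meta-learning setting concrete, the sketch below runs a Reptile-style outer loop in which both the shared initialization and the within-task step size are adapted across tasks. It is an assumption-laden toy on synthetic regression tasks, not the algorithms or guarantees developed in the paper; the task sampler and the step-size adaptation rule are hypothetical illustrations.

```python
# Minimal sketch: a gradient-based meta-learning outer loop (Reptile-style)
# with a crudely adapted within-task step size. Illustrative only.
import numpy as np

def sgd_within_task(w_init, X, y, lr, steps=10):
    """Plain gradient descent on squared loss for one task, from the meta-initialization."""
    w = w_init.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
d = 5
w_meta = np.zeros(d)     # shared initialization, updated across tasks
lr_task = 0.1            # within-task step size, adapted across tasks
meta_lr = 0.05

for t in range(100):
    # Hypothetical task sampler: tasks share a common underlying parameter.
    w_true = np.ones(d) + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(50, d))
    y = X @ w_true

    w_task = sgd_within_task(w_meta, X, y, lr_task)

    # Reptile-style update: move the initialization toward the adapted weights.
    displacement = w_task - w_meta
    w_meta += meta_lr * displacement

    # Crude step-size adaptation based on how far the task solution moved
    # (a stand-in for learning the task-similarity adaptively).
    lr_task = 0.9 * lr_task + 0.1 * min(1.0, float(np.linalg.norm(displacement)))
```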
Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet S. Talwalkar
Federated Multi-Task Learning
Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet S. Talwalkar