Model Agnostic Supervised Local Explanations

Gregory Plumb, Denali Molitor, Ameet S. Talwalkar

Neural Information Processing Systems

Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method).
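The dual interpretation of random forests described in the abstract can be sketched roughly as follows: the forest defines a supervised neighborhood (training points that share leaves with the query), and a weighted linear model fit on that neighborhood serves as the local explanation. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name and data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Toy data: y depends linearly on feature 0 and nonlinearly on feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
leaves_train = forest.apply(X)  # (n_samples, n_trees) leaf indices

def local_linear_explanation(x_query):
    """Weighted linear fit, weighting each training point by the
    fraction of trees in which it lands in the same leaf as x_query."""
    leaves_q = forest.apply(x_query.reshape(1, -1))[0]
    weights = (leaves_train == leaves_q).mean(axis=1)
    lm = LinearRegression().fit(X, y, sample_weight=weights)
    return lm.coef_, lm.intercept_

coef, intercept = local_linear_explanation(np.zeros(3))
```

Since the underlying relationship is linear in feature 0, the locally fitted coefficient on that feature should land near 2.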




d045c59a90d7587d8d671b5f5aec4e7c-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all reviewers for their constructive comments and address the raised issues below. As described in Section 3.2 of the manuscript, we introduce the … The source code, as mentioned on L141, will be made available to the public. R1: Why is the adaptive flow filtering a better way of reducing artifacts? Our method can be seen as a learnable median filter in spirit. Although the quantitative improvement from the adaptive flow filtering (ada.) is small, this component is important for generating results with higher visual quality. SepConv was originally trained on high-quality videos with large motion.


Supplemental: Training Fully Connected Neural Networks is ∃R-Complete (A: ∃R-Membership)

Neural Information Processing Systems

Membership in ∃R is already proven by Abrahamsen, Kleist and Miltzow in [3]. The algorithm then needs to verify that the neural network described by Θ fits all data points in D with a total error at most γ. The goal of this appendix is to build a geometric understanding of f(·, Θ). We point the interested reader to these articles [6, 26, 49, 66, 92] investigating the set of functions exactly represented by different architectures of ReLU networks. To see that this observation is true, consider the following construction.
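The verification step mentioned above, checking that a network with parameters Θ achieves total error at most γ on D, can be sketched for a one-hidden-layer ReLU network. This is a hypothetical illustration of the check, not the paper's construction; the parameter layout (W1, b1, W2, b2) is an assumption.

```python
import numpy as np

def total_error_at_most(theta, D, gamma):
    """Check whether the ReLU network given by theta = (W1, b1, W2, b2)
    fits every (x, y) in D with total absolute error <= gamma."""
    W1, b1, W2, b2 = theta
    err = 0.0
    for x, y in D:
        h = np.maximum(W1 @ x + b1, 0.0)  # hidden ReLU layer
        err += abs(float(W2 @ h + b2) - y)
    return err <= gamma

# A network computing f(x) = x on nonnegative inputs fits this data exactly.
theta = (np.array([[1.0]]), np.array([0.0]), np.array([1.0]), 0.0)
D = [(np.array([0.5]), 0.5), (np.array([2.0]), 2.0)]
result = total_error_at_most(theta, D, 0.01)
```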




Author response for " Fixing the train-test resolution discrepancy "

Neural Information Processing Systems

We thank the reviewers for their constructive feedback on the paper. Here we answer their main questions and comments. In addition, are the results shown significant? In particular, we have evaluated our approach for transfer learning for low-resource and/or fine-grained classification. Then (3) we use our method, i.e. we fine-tune the last Finally, we applied our method to a very large ResNeXt-101 32x48d from [Mahajan et al.
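The fine-tuning step alluded to above (adapting only the final layer after training, so the network is evaluated at the higher test resolution) can be sketched as follows. This is a minimal sketch under assumptions, not the authors' code: the backbone is a toy stand-in, and the resolution and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Tiny stand-in backbone; global pooling makes it resolution-agnostic.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier = nn.Linear(8, 10)
model = nn.Sequential(backbone, classifier)

# Freeze the pretrained backbone; only the last layer is fine-tuned.
for p in backbone.parameters():
    p.requires_grad = False

opt = torch.optim.SGD(classifier.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a batch at the higher "test" resolution.
x = torch.randn(4, 3, 320, 320)
y = torch.randint(0, 10, (4,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```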