Explaining Deep Learning Models -- A Bayesian Non-parametric Approach

Wenbo Guo, Sui Huang, Yunzhe Tao, Xinyu Xing, Lin Lin

Neural Information Processing Systems

Understanding and interpreting how machine learning (ML) models make decisions has been a major challenge. While recent research has proposed various technical approaches that offer some clues as to how an ML model makes individual predictions, they do not give users the ability to inspect a model as a complete entity. In this work, we propose a novel technical approach that augments a Bayesian non-parametric regression mixture model with multiple elastic nets. Using the enhanced mixture model, we can extract generalizable insights into a target model through a global approximation. To demonstrate the utility of our approach, we evaluate it on different ML models in the context of image recognition. The empirical results indicate that our proposed approach not only outperforms state-of-the-art techniques in explaining individual decisions but also gives users the ability to discover vulnerabilities in the target ML models.
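The abstract describes approximating a target model globally with a mixture of elastic-net regressions. As a rough, hypothetical sketch (not the authors' implementation), one can mimic the idea by partitioning the input space with k-means in place of the paper's Bayesian non-parametric (Dirichlet-process) prior and fitting one scikit-learn `ElasticNet` per component; the toy `target_model`, the number of components, and all parameter values below are assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import ElasticNet

# Hypothetical black-box model to be explained; here a toy, mostly
# linear function standing in for a trained classifier/regressor.
def target_model(X):
    return X @ np.array([2.0, -1.0, 0.0, 0.5]) + 0.1 * np.sin(X[:, 0])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = target_model(X)

# Simplified stand-in for the paper's non-parametric mixture: partition
# the input space (k-means here, rather than inferring components with a
# Dirichlet process) and fit one elastic-net regression per component.
K = 3  # assumed number of components; the paper infers this from data
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
surrogates = []
for k in range(K):
    m = ElasticNet(alpha=0.01, l1_ratio=0.5)
    m.fit(X[labels == k], y[labels == k])
    surrogates.append(m)

# Each component's sparse coefficients act as a local explanation;
# viewed together across components they give a coarse global picture
# of which features the target model relies on.
for k, m in enumerate(surrogates):
    print(f"component {k}: coefficients = {np.round(m.coef_, 2)}")
```

In this sketch the elastic-net penalty keeps each component's coefficients sparse and interpretable, which is the property the abstract exploits for explanation; the k-means partition is only a crude substitute for the mixture model's learned components.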



q-and-a.html

#artificialintelligence

The field of AI studies the general problem of creating intelligence in machines; it is not a specific technical product arising from research on that problem. For example, in the 1980s and 1990s one often saw articles confusing AI with rule-based expert systems; in the 2010s, one sees AI being confused with many-layered convolutional neural networks. Similarly, it's common to see authors identifying AI with symbolic or logical approaches and contrasting AI with "other approaches" such as neural nets or genetic programming. AI is not an approach; it is a problem, and any approach to that problem counts as a contribution to AI.