Structural Neural Additive Models: Enhanced Interpretable Machine Learning

Mattias Luber, Anton Thielmann, Benjamin Säfken

arXiv.org Artificial Intelligence 

Deep neural networks (DNNs) have become the go-to method for problems requiring high-level predictive power. There has been extensive research on how DNNs arrive at their decisions; however, the inherently uninterpretable networks remain to this day mostly unobservable "black boxes". In recent years, the field has seen a push towards interpretable neural networks, such as the visually interpretable Neural Additive Models (NAMs). We propose a further step in the direction of intelligibility beyond the mere visualization of feature effects and propose Structural Neural Additive Models (SNAMs): a modeling framework that combines classical, clearly interpretable statistical methods with the predictive power of neural applications.

Neural Additive Models (NAMs) (Agarwal et al., 2021b) were recently proposed as a class of neural networks that impose an additivity constraint on the input data and thus allow the feature-wise contribution to the generated predictions to be derived directly as a function of the input domain. While this indeed yields an exact representation of the decision-making process, NAMs are nevertheless highly complex functions characterized by hundreds of thousands of parameters and thus fail to address additional dimensions of interpretability (Murdoch et al., 2019). To this end, we propose the use of Structural Neural Additive Models (SNAMs) as a way to achieve the same (and even better) predictive performance with a fraction of the parameters required, while providing intelligibility beyond mere visualizations. The contributions of SNAMs can be …
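Both NAMs and SNAMs build on the generalized additive model structure g(E[y | x]) = beta_0 + f_1(x_1) + ... + f_p(x_p), in which each shape function f_j depends on a single feature and can therefore be visualized directly. The sketch below illustrates the parameter-count argument only and does not reproduce the paper's architecture: the layer widths, spline type, and knot placement are assumptions. It contrasts a NAM-style shape function (a small per-feature MLP) with a shape function built from a fixed spline basis, as used in classical structural additive regression, where only the basis coefficients are learned.

# Illustrative sketch only (assumed sizes, not the paper's architecture):
# compares the number of learnable parameters in a NAM-style per-feature MLP
# with a classical spline-basis shape function of the kind SNAMs draw on.
import torch
import torch.nn as nn

class MLPShapeFunction(nn.Module):
    """NAM-style shape function: a small MLP applied to one feature."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class SplineShapeFunction(nn.Module):
    """Shape function as a linear combination of fixed truncated-power
    spline basis functions; only the basis coefficients are learned."""
    def __init__(self, knots):
        super().__init__()
        self.register_buffer("knots", torch.as_tensor(knots, dtype=torch.float32))
        self.coef = nn.Parameter(torch.zeros(len(knots) + 2))

    def forward(self, x):  # x: (batch, 1)
        basis = [torch.ones_like(x), x]
        basis += [torch.clamp(x - k, min=0.0) ** 3 for k in self.knots]
        B = torch.cat(basis, dim=1)        # (batch, n_basis)
        return B @ self.coef.unsqueeze(1)  # (batch, 1)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(MLPShapeFunction()))                                # 4353 per feature
print(n_params(SplineShapeFunction(torch.linspace(0.1, 0.9, 8))))  # 10 per feature

Because the full additive prediction is simply the sum of the per-feature outputs plus an intercept, these per-feature parameter counts scale the total model size directly, which is the sense in which a spline-based shape function needs only a fraction of the parameters of an MLP-based one.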
