Explainable Classification Techniques for Quantum Dot Device Measurements
Schug, Daniel, Kovach, Tyler J., Wolfe, M. A., Benson, Jared, Park, Sanghyeok, Dodson, J. P., Corrigan, J., Eriksson, M. A., Zwolak, Justyna P.
There has been a longstanding trade-off between the accuracy of a candidate machine learning (ML) model and its interpretability. This is evident in the extreme example of deep neural networks (DNNs), which can offer excellent accuracy for many problems but are limited in their interpretability due to the number of inaccessible layers. Alternatively, there are simple techniques, such as linear models or decision trees, that offer the user full comprehension of the internal weights. However, these are often unable to model the complex relationships seen in modern datasets. For tabular data, there has been considerable progress toward finding a middle ground, typically through explaining complex models with surrogates such as LIME (Ribeiro, Singh, and Guestrin 2016) and SHAP (Lundberg and Lee 2017).

Our previous work developed a methodology that addresses some of these concerns by combining vectorization methods for image data with explainable boosting machines (EBMs). The possibility of using EBMs as models for image data poses numerous challenges, chief among which is the mapping from images to a vector representation that can then be used directly with EBMs. In our previous work, we used the Gabor wavelet transform in conjunction with a constrained optimization procedure to extract key image features from the data (Schug et al. 2024). We also applied highly customized feature engineering to tailor this process to the particular dataset (Schug et al. 2023). In both cases, we relied on domain knowledge.
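To make the image-to-vector step concrete, the following is a minimal sketch of the idea, not the pipeline from the cited work: each image is summarized by pooled Gabor filter responses, and the resulting fixed-length vectors are fit with an EBM. It assumes scikit-image's `gabor` filter and the `interpret` package's `ExplainableBoostingClassifier`; the filter frequencies, orientations, pooling statistics, and toy data are all illustrative placeholders.

```python
# Sketch: Gabor-based image vectorization followed by an EBM classifier.
# Filter settings and the random data below are placeholders, not the
# parameters used in Schug et al. (2023, 2024).
import numpy as np
from skimage.filters import gabor
from interpret.glassbox import ExplainableBoostingClassifier

def gabor_feature_vector(image, frequencies=(0.1, 0.2, 0.4), n_thetas=4):
    """Pool Gabor responses into a fixed-length vector (mean and std per filter)."""
    feats = []
    for freq in frequencies:
        for theta in np.linspace(0, np.pi, n_thetas, endpoint=False):
            real, imag = gabor(image, frequency=freq, theta=theta)
            mag = np.hypot(real, imag)             # response magnitude
            feats.extend([mag.mean(), mag.std()])  # simple global pooling
    return np.array(feats)

# Toy stand-in data: 64 random 32x32 "measurements" with binary labels.
rng = np.random.default_rng(0)
images = rng.normal(size=(64, 32, 32))
labels = rng.integers(0, 2, size=64)

# Vectorize every image, then fit the glassbox EBM on the feature vectors.
X = np.stack([gabor_feature_vector(img) for img in images])
ebm = ExplainableBoostingClassifier()
ebm.fit(X, labels)

# Per-feature shape functions are directly inspectable.
explanation = ebm.explain_global()
```

The constrained optimization that Schug et al. (2024) use to select filter parameters is omitted here; the sketch hard-codes a small filter bank instead.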
arXiv.org Artificial Intelligence
May 7, 2024