Locally Interpretable Individualized Treatment Rules for Black-Box Decision Models

Charvadeh, Yasin Khadem, Panageas, Katherine S., Chen, Yuan

arXiv.org Machine Learning

Existing methods for estimating individualized treatment rules typically rely on either interpretable but inflexible models or highly flexible black-box approaches that sacrifice interpretability; moreover, most impose a single global decision rule across patients. We introduce the Locally Interpretable Individualized Treatment Rule (LI-ITR) method, which combines flexible machine learning models, to accurately learn complex treatment outcomes, with locally interpretable approximations, to construct subject-specific treatment rules. LI-ITR employs variational autoencoders to generate realistic local synthetic samples and learns individualized decision rules through a mixture of interpretable experts. Simulation studies show that LI-ITR accurately recovers true subject-specific local coefficients and optimal treatment strategies. An application to precision side-effect management in breast cancer illustrates the necessity of flexible predictive modeling and highlights the practical utility of LI-ITR in estimating optimal treatment rules while providing transparent, clinically interpretable explanations.
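The core idea of a locally interpretable approximation can be illustrated with a minimal sketch: sample synthetic points near one subject, query the black-box model on them, and fit a local linear surrogate whose coefficients explain the rule for that subject. This is a simplified stand-in, not the paper's method: the Gaussian perturbation below replaces the VAE-based sampler, the least-squares fit replaces the mixture of interpretable experts, and `black_box_benefit` is a hypothetical outcome model invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model of treatment benefit: the true benefit
# depends nonlinearly on two patient covariates.
def black_box_benefit(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# One patient whose individualized rule we want to explain locally.
x0 = np.array([0.3, -1.0])

# Step 1 (stand-in for the paper's VAE sampler): draw synthetic samples
# in a small neighborhood of the patient.
local = x0 + 0.1 * rng.standard_normal((500, 2))

# Step 2: query the black box on the synthetic neighborhood.
y = black_box_benefit(local)

# Step 3: fit a local linear surrogate by least squares; its slopes are
# subject-specific coefficients approximating the rule near x0.
X = np.column_stack([np.ones(len(local)), local])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Near x0 the local slopes approximate the partial derivatives of the
# black box: roughly cos(0.3) for covariate 1 and x0[1] for covariate 2.
print(coef)
```

Because the perturbations are symmetric and the neighborhood is small, the fitted slopes recover the black box's local gradient, which is exactly the kind of transparent, subject-specific explanation the abstract describes.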



MetricFormer

Neural Information Processing Systems

Similarity learning can be significantly advanced by informative relationships among different samples and features. The current methods try to excavate the multiple correlations indifferent aspects, butcannot integratethemintoaunified framework. In this paper,we provide to consider the multiple correlations from a unified perspective and propose a new method called MetricFormer, which can effectively capture and model the multiple correlations with an elaborate metric transformer.