Supplementary Material for GPEX, A Framework For Interpreting Artificial Neural Networks

Akbarnejad, Amir, Bigras, Gilbert, Ray, Nilanjan

Neural Information Processing Systems

Fig. S1: The proposed framework as a probabilistic graphical model.

In this section we derive the variational lower-bound introduced in Sec.2.3 of the main article. We first introduce Lemmas 1 and 2, as they appear in our derivations. As illustrated in Fig.S1, the lower boxes are the inducing points and the other variables that determine the GPs' posterior.

S1.1 Deriving the Lower-bound With Respect to the Kernel-mappings

In the right-hand side of Eq.S6, only the following terms are dependent on the kernel-mappings. The first term is the expected log-likelihood of a Gaussian distribution (i.e., the conditional log-likelihood of the GPs' outputs given the inducing points), so we can use Lemma 2 to simplify it. According to Lemma 1, the KL-term of Eq.S8 is a constant with respect to the kernel-mappings. All in all, the lower-bound for optimizing the kernel-mappings is equal to the right-hand side of Eq.S9, which was introduced and discussed in Sec.2.3 of the main article.

S1.2 Deriving the Lower-bound With Respect to the ANN Parameters

According to Eq.4 of the main article, in our formulation the ANN's parameters appear as variational parameters. Although the likelihood of all variables (Eq.S6) does not directly depend on the ANN's parameters, the resulting lower-bound does, and it turns out to be equivalent to commonly-used losses like the cross-entropy loss or the mean-squared loss. Here we elaborate on how this happens. This conclusion was introduced and discussed in Eq.6 of the main article. We can draw similar conclusions when the pipeline targets other tasks like regression, or even a combination of tasks.
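For concreteness, the decomposition described above has the shape of the standard sparse-GP evidence lower-bound. The sketch below is a generic, textbook-style (Titsias-style) form, not a reproduction of the paper's Eq.S6-S9; the symbols q(u), a(x), and sigma^2 are notational assumptions of this sketch, not symbols taken from the paper.

```latex
% Generic sparse-GP variational lower-bound; a sketch of the decomposition
% described above, NOT the paper's exact Eq.S6--S9.
% y: targets, f: GP values at the inputs, u: values at the inducing points.
\log p(\mathbf{y}) \;\ge\;
  \underbrace{\mathbb{E}_{q(\mathbf{f})}\!\big[\log p(\mathbf{y}\mid\mathbf{f})\big]}_{
    \text{expected Gaussian log-likelihood (a Lemma-2-style simplification applies)}}
  \;-\;
  \underbrace{\mathrm{KL}\!\big(q(\mathbf{u})\,\big\|\,p(\mathbf{u})\big)}_{
    \text{KL-term (constant w.r.t.\ the kernel-mappings, per the text's Lemma 1)}}

% If the variational posterior over f is a point mass at the ANN output a(x)
% and the likelihood is Gaussian with variance \sigma^2, the first term
% reduces to a mean-squared loss, consistent with the claim in Sec.~S1.2:
\mathbb{E}\big[\log \mathcal{N}(\mathbf{y};\, a(\mathbf{x}),\, \sigma^2 I)\big]
  \;=\; -\tfrac{1}{2\sigma^2}\,\lVert \mathbf{y}-a(\mathbf{x})\rVert^2 \;+\; \text{const}.
```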



Locally Private Parametric Methods for Change-Point Detection

Yadav, Anuj Kumar, Cadir, Cemre, Shkel, Yanina, Gastpar, Michael

arXiv.org Machine Learning

We study parametric change-point detection, where the goal is to identify distributional changes in time series, under local differential privacy. In the non-private setting, we derive improved finite-sample accuracy guarantees for a change-point detection algorithm based on the generalized log-likelihood ratio test, via martingale methods. In the private setting, we propose two locally differentially private algorithms based on randomized response and binary mechanisms, and analyze their theoretical performance. We derive bounds on detection accuracy and validate our results through empirical evaluation. Our results characterize the statistical cost of local differential privacy in change-point detection and show how privacy degrades performance relative to a non-private benchmark. As part of this analysis, we establish a structural result for strong data processing inequalities (SDPI), proving that SDPI coefficients for Rényi divergences and their symmetric variants (Jeffreys-Rényi divergences) are achieved by binary input distributions. These results on SDPI coefficients are also of independent interest, with applications to statistical estimation, data compression, and Markov chain mixing.
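To make the first of the two private mechanisms concrete, here is a minimal sketch, not the authors' implementation, of epsilon-LDP randomized response applied to a binarized stream, followed by a simple offline generalized log-likelihood-ratio (GLR) scan for a single change point. All function names, the Bernoulli model, and the toy data are illustrative assumptions of this sketch.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Classic eps-LDP mechanism: keep the bit with prob e^eps/(1+e^eps), else flip."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def bernoulli_loglik(ones: int, n: int) -> float:
    """Bernoulli log-likelihood of a segment, maximized at the MLE p = ones/n."""
    if n == 0 or ones == 0 or ones == n:
        return 0.0  # at the boundary MLEs (p in {0, 1}) the log-likelihood is 0
    p = ones / n
    return ones * math.log(p) + (n - ones) * math.log(1 - p)

def glr_change_point(bits: list[int]) -> tuple[int, float]:
    """Scan all split points k and return (best k, max GLR statistic).

    GLR(k) = suploglik(bits[:k]) + suploglik(bits[k:]) - suploglik(bits),
    i.e., 'one change at k' versus 'no change' for Bernoulli segments.
    """
    n = len(bits)
    total_ones = sum(bits)
    base = bernoulli_loglik(total_ones, n)
    best_k, best_stat = 0, float("-inf")
    prefix_ones = 0
    for k in range(1, n):  # split: bits[:k] vs bits[k:]
        prefix_ones += bits[k - 1]
        stat = (bernoulli_loglik(prefix_ones, k)
                + bernoulli_loglik(total_ones - prefix_ones, n - k)
                - base)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

if __name__ == "__main__":
    random.seed(0)
    # Toy stream: Bernoulli(0.2) before the change at t = 500, Bernoulli(0.8) after.
    raw = [int(random.random() < (0.2 if t < 500 else 0.8)) for t in range(1000)]
    private = [randomized_response(b, epsilon=1.0) for b in raw]
    k, stat = glr_change_point(private)
    print(f"estimated change point: {k}, GLR statistic: {stat:.2f}")
```

The offline split-scan above is only the simplest GLR variant; the paper analyzes the sequential setting with finite-sample guarantees and also proposes a binary-mechanism-based algorithm, neither of which is reproduced here.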