Appendix A Algorithm details

Neural Information Processing Systems

A.1 GLASS. Algorithm 1 describes the GAN-based latent space search attack (GLASS). A standard ResNet-18 network is divided into blocks, as shown in Figure 8. For GLASS, we set the learning rate to 1e-2 and the number of iterations to 20,000. For IN, we selected a learning rate of 1e-3 and performed 30 training epochs. The accuracy of each defended model and its corresponding defense hyperparameters are shown in Table 3 (we set the split point uniformly to Block3). For Shredder, we train 50 distributions, all of which maintain an accuracy above 77%. As Figure 10 shows, a curve closer to the upper left implies a better privacy-utility trade-off. NoPeek and DISCO achieve the best defensive effect against almost all DRAs.
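As a toy illustration of the latent-space search loop with the stated hyperparameters (learning rate 1e-2, 20,000 iterations), here is a minimal numpy sketch. The generator `G` and client network `f` are stand-in random linear maps of hypothetical shapes, not the StyleGAN and ResNet-18 blocks from the paper; only the optimize-latent-to-match-smashed-features structure is faithful.

```python
import numpy as np

# Toy sketch of the GLASS latent-space search: search for a latent z whose
# generated sample reproduces the observed smashed features f(G(z)) = target.
rng = np.random.default_rng(0)
G = rng.normal(size=(32, 8)) / np.sqrt(32)   # stand-in generator: latent (8) -> sample (32)
f = rng.normal(size=(16, 32)) / np.sqrt(32)  # stand-in client network: sample (32) -> features (16)

z_true = rng.normal(size=8)
target = f @ (G @ z_true)       # smashed data observed by the attacker

z = np.zeros(8)
init_error = np.linalg.norm(target)  # residual at the all-zeros starting point
lr = 1e-2                       # learning rate as stated above
for _ in range(20_000):         # iteration budget as stated above
    residual = f @ (G @ z) - target
    grad = G.T @ (f.T @ residual)   # gradient of 0.5 * ||f(G(z)) - target||^2
    z -= lr * grad

recon_error = np.linalg.norm(f @ (G @ z) - target)
```

In the actual attack the gradient flows through a non-linear generator and client network via backpropagation; the linear stand-ins here only make the search dynamics visible in a few lines.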


GAN You See Me? Enhanced Data Reconstruction Attacks against Split Inference

Ziang Li, Mengda Yang

Neural Information Processing Systems






Split Inference (SI) is an emerging deep learning paradigm that addresses computational constraints on edge devices and preserves data privacy through collaborative edge-cloud approaches. However, SI is vulnerable to Data Reconstruction Attacks (DRA), which aim to reconstruct users' private prediction instances. Existing attack methods suffer from various limitations. Optimization-based DRAs do not leverage public data effectively, while Learning-based DRAs depend heavily on auxiliary data quantity and distribution similarity. Consequently, these approaches yield unsatisfactory attack results and are sensitive to defense mechanisms. To overcome these challenges, we propose a GAN-based LAtent Space Search attack (GLASS) that harnesses abundant prior knowledge from public data using advanced StyleGAN technologies. Additionally, we introduce GLASS++ to enhance reconstruction stability. Our approach represents the first GAN-based DRA against SI, and extensive evaluation across different split points and adversary setups demonstrates its state-of-the-art performance. Moreover, we thoroughly examine seven defense mechanisms, highlighting our method's capability to reveal private information even in the presence of these defenses.
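The split-inference setting the abstract describes can be sketched in a few lines of numpy. The two-layer model and its shapes are hypothetical; the point is the protocol: the client runs the early layers on-device and ships only the intermediate "smashed" activations, so a server-side DRA adversary observes `smashed` rather than the raw input.

```python
import numpy as np

# Minimal sketch of Split Inference with a hypothetical two-layer model.
rng = np.random.default_rng(1)
W_client = rng.normal(size=(16, 10))   # layers kept on the edge device
W_server = rng.normal(size=(3, 16))    # layers kept in the cloud

def client_forward(x):
    # Smashed data: the only thing sent over the network.
    return np.maximum(W_client @ x, 0.0)

def server_forward(smashed):
    # The cloud completes the forward pass from the smashed activations.
    return W_server @ smashed

x = rng.normal(size=10)                # private prediction instance
smashed = client_forward(x)
logits = server_forward(smashed)
```

A DRA attacker who controls the server sees only `smashed` and must invert `client_forward`, which is exactly the reconstruction problem GLASS attacks via a GAN's latent space.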


Extracting Robust Register Automata from Neural Networks over Data Sequences

Hong, Chih-Duo, Jiang, Hongjian, Lin, Anthony W., Markgraf, Oliver, Parsert, Julian, Tan, Tony

arXiv.org Artificial Intelligence

Automata extraction is a method for synthesising interpretable surrogates for black-box neural models that can be analysed symbolically. Existing techniques assume a finite input alphabet, and thus are not directly applicable to data sequences drawn from continuous domains. We address this challenge with deterministic register automata (DRAs), which extend finite automata with registers that store and compare numeric values. Our main contribution is a framework for robust DRA extraction from black-box models: we develop a polynomial-time robustness checker for DRAs with a fixed number of registers, and combine it with passive and active automata learning algorithms. This combination yields surrogate DRAs with statistical robustness and equivalence guarantees. As a key application, we use the extracted automata to assess the robustness of neural networks: for a given sequence and distance metric, the DRA either certifies local robustness or produces a concrete counterexample. Experiments on recurrent neural networks and transformer architectures show that our framework reliably learns accurate automata and enables principled robustness evaluation. Overall, our results demonstrate that robust DRA extraction effectively bridges neural network interpretability and formal reasoning without requiring white-box access to the underlying network.
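To make the register mechanism concrete, here is a minimal sketch of a one-register DRA (an illustrative toy, not one extracted by the paper's framework): it accepts a numeric sequence iff two consecutive values are equal. Guards compare the current input against register contents and updates store values into registers, which is what lets DRAs range over continuous data domains despite having finitely many states.

```python
# A tiny deterministic register automaton (DRA) with one register.
def dra_consecutive_repeat(seq):
    register = None       # single register storing the previous value
    state = "scan"        # control states: "scan" (searching), "accept" (found)
    for value in seq:
        if state == "scan":
            if register is not None and value == register:
                state = "accept"   # guard fired: input equals register content
            register = value       # update: store the current value
    return state == "accept"
```

A finite-alphabet automaton cannot express this language over the reals, since "the same value appears twice in a row" requires remembering an unbounded set of possible values; the register reduces that to a single stored number.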


Private Frequency Estimation Via Residue Number Systems

Arcolezi, Héber H.

arXiv.org Artificial Intelligence

We present \textsf{ModularSubsetSelection} (MSS), a new algorithm for locally differentially private (LDP) frequency estimation. Given a universe of size $k$ and $n$ users, our $\varepsilon$-LDP mechanism encodes each input via a Residue Number System (RNS) over $\ell$ pairwise-coprime moduli $m_0, \ldots, m_{\ell-1}$, and reports a randomly chosen index $j \in [\ell]$ along with the perturbed residue using the statistically optimal \textsf{SubsetSelection} (SS) (Wang et al. 2016). This design reduces the user communication cost from $\Theta\bigl(\omega\log_2(k/\omega)\bigr)$ bits required by standard SS (with $\omega\approx k/(e^\varepsilon+1)$) down to $\lceil \log_2 \ell \rceil + \lceil \log_2 m_j \rceil$ bits, where $m_j < k$. Server-side decoding runs in $\Theta(n + r k \ell)$ time, where $r$ is the number of LSMR (Fong and Saunders 2011) iterations. In practice, with well-conditioned moduli (\textit{i.e.}, constant $r$ and $\ell = \Theta(\log k)$), this becomes $\Theta(n + k \log k)$. We prove that MSS achieves worst-case MSE within a constant factor of state-of-the-art protocols such as SS and \textsf{ProjectiveGeometryResponse} (PGR) (Feldman et al. 2022) while avoiding the algebraic prerequisites and dynamic-programming decoder required by PGR. Empirically, MSS matches the estimation accuracy of SS, PGR, and \textsf{RAPPOR} (Erlingsson, Pihur, and Korolova 2014) across realistic $(k, \varepsilon)$ settings, while offering faster decoding than PGR and shorter user messages than SS. Lastly, by sampling from multiple moduli and reporting only a single perturbed residue, MSS achieves the lowest reconstruction-attack success rate among all evaluated LDP protocols.
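The RNS encoding at the heart of MSS is easy to sketch. With the hypothetical moduli below (pairwise coprime, product at least $k$), an input $x \in [0, k)$ is represented by its residues, and each user need only report one (index, residue) pair, which is what shortens the LDP message. The CRT decoder here recovers $x$ from all residues in the noiseless case; the paper's server instead aggregates perturbed reports with an LSMR-based least-squares decoder, which this sketch does not cover.

```python
from math import prod

# Hypothetical pairwise-coprime moduli; their product bounds the universe size k.
moduli = [7, 11, 13]
k = prod(moduli)   # 1001

def rns_encode(x):
    # Residue Number System representation of x: one residue per modulus.
    return [x % m for m in moduli]

def crt_decode(residues):
    # Chinese Remainder Theorem reconstruction of x from all residues.
    x = 0
    for r, m in zip(residues, moduli):
        M = k // m
        x += r * M * pow(M, -1, m)   # pow(M, -1, m) is the modular inverse
    return x % k
```

A single residue $x \bmod m_j$ costs only $\lceil \log_2 m_j \rceil$ bits, versus $\lceil \log_2 k \rceil$ for $x$ itself, matching the communication saving claimed in the abstract.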


