A Appendix

Neural Information Processing Systems

A.1 Prototype-based Graph Information Bottleneck (Eq. 4). From Eq. 3, we derive the GIB objective. We perform ablation studies to examine the effectiveness of our model (i.e., PGIB and its variants). In Figure 7, the "with all" setting represents our final model, which includes all the components. We conduct experiments on graph classification using different readout functions for PGIB. We illustrate the reasoning process on two datasets, i.e., MUTAG and BA2Motif, in Figure 8. Then, PGIB computes the "points contributed" to predicting each class by multiplying the similarity between the input and each prototype by the corresponding class weight. We have conducted additional qualitative analysis. It is crucial that the prototypes not only contain key structural information from the input graph but also ensure a certain level of diversity, since each class is represented by multiple prototypes. Its goal is to make the masked subgraph's prediction as close as possible to that of the original graph, which helps to detect significant substructures.
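The "points contributed" reasoning described above can be sketched numerically. This is an illustrative ProtoPNet-style computation, not the authors' implementation; the similarity values and weight matrix below are made-up toy numbers (6 prototypes, 2 classes).

```python
import numpy as np

# Hypothetical similarity of the input graph's embedding to each of six
# learned prototypes (three per class), e.g. exp(-||z - p_k||^2) in (0, 1].
similarity = np.array([0.9, 0.1, 0.3, 0.2, 0.8, 0.7])

# Toy weights connecting prototypes to class logits: each prototype
# mainly supports its own class.
weights = np.array([
    [1.0, 0.0],
    [1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [0.0, 1.0],
    [0.0, 1.0],
])

# "Points contributed" to each class: similarity times weight, summed
# over prototypes to obtain the per-class score.
points = similarity[:, None] * weights   # per-prototype contribution
logits = points.sum(axis=0)              # class scores: [1.3, 1.7]
```

Inspecting `points` row by row shows which prototype pushed the prediction toward which class, which is exactly what makes this kind of reasoning process interpretable.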




A Differentiable Logical Operators: T-norms

Neural Information Processing Systems

Fuzzy operators can be applied to vectors of continuous values within a certain range, e.g., [0, 1]. Different fuzzy logics implement different t-norms and t-conorms. NodePiece-QE results are reported in Table 13 in Appendix D. We sampled 9 datasets (used in Section 5.2 and Section 5.3) from the original FB15k-237 [29]. Creation details are provided in Section 5.1, along with statistics on the sampled data. We use those queries in Section 5.5. Table 5: Statistics on sampled queries for each dataset ratio and query type. Furthermore, for the experiment in Section 5.3, we measure the abilities of inductive models to find new answers. Most queries (except 2i, 3i) have new answer sets.
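The three classical fuzzy logics and their elementwise t-norm (conjunction) / t-conorm (disjunction) pairs can be sketched directly on vectors of truth values; this is a generic illustration of the standard definitions, not any specific model's operator set.

```python
import numpy as np

# Vectors of continuous truth values in [0, 1].
a = np.array([0.2, 0.5, 0.9])
b = np.array([0.6, 0.5, 0.1])

# Goedel (minimum) logic.
godel_and = np.minimum(a, b)          # t-norm
godel_or  = np.maximum(a, b)          # t-conorm

# Product logic.
prod_and = a * b                      # t-norm
prod_or  = a + b - a * b              # t-conorm

# Lukasiewicz logic.
luk_and = np.maximum(a + b - 1.0, 0.0)  # t-norm
luk_or  = np.minimum(a + b, 1.0)        # t-conorm
```

All three pairs are differentiable almost everywhere, which is what makes them usable as drop-in logical operators inside a neural query executor.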






Safeguarding Graph Neural Networks against Topology Inference Attacks

Fu, Jie, Hong, Yuan, Chen, Zhili, Wang, Wendy Hui

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, their widespread adoption has raised serious privacy concerns. While prior research has primarily focused on edge-level privacy, a critical yet underexplored threat lies in topology privacy - the confidentiality of the graph's overall structure. In this work, we present a comprehensive study on topology privacy risks in GNNs, revealing their vulnerability to graph-level inference attacks. To this end, we propose a suite of Topology Inference Attacks (TIAs) that can reconstruct the structure of a target training graph using only black-box access to a GNN model. Our findings show that GNNs are highly susceptible to these attacks, and that existing edge-level differential privacy mechanisms are insufficient as they either fail to mitigate the risk or severely compromise model accuracy. To address this challenge, we introduce Private Graph Reconstruction (PGR), a novel defense framework designed to protect topology privacy while maintaining model accuracy. PGR is formulated as a bi-level optimization problem, where a synthetic training graph is iteratively generated using meta-gradients, and the GNN model is concurrently updated based on the evolving graph. Extensive experiments demonstrate that PGR significantly reduces topology leakage with minimal impact on model accuracy. Our code is available at https://github.com/JeffffffFu/PGR.
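To make the threat concrete, here is a common black-box edge-inference heuristic (illustrative only, not the paper's exact TIA variants): because message passing correlates neighbors' outputs, node pairs whose class posteriors from the target GNN are most similar are guessed to be connected. The function name and toy posteriors below are assumptions for the sketch.

```python
import numpy as np

def infer_edges(posteriors, top_k):
    """Guess top_k edges from black-box posteriors by cosine similarity.

    posteriors: (n_nodes, n_classes) array of the target model's outputs.
    Returns the top_k node pairs ranked by posterior similarity.
    """
    n = posteriors.shape[0]
    # Cosine similarity between every pair of posterior vectors.
    norm = posteriors / np.linalg.norm(posteriors, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)  # ignore self-pairs
    # Rank all unordered pairs by similarity and keep the top_k.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    pairs.sort(key=lambda p: sim[p], reverse=True)
    return pairs[:top_k]

# Toy posteriors: nodes 0,1 agree strongly, as do nodes 2,3.
P = np.array([[0.90, 0.10],
              [0.85, 0.15],
              [0.10, 0.90],
              [0.20, 0.80]])
print(infer_edges(P, top_k=2))  # → [(0, 1), (2, 3)]
```

A defense like PGR breaks exactly this correlation: training on a synthetic graph decouples the released model's outputs from the true topology.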


WST: Weakly Supervised Transducer for Automatic Speech Recognition

Gao, Dongji, Liao, Chenda, Liu, Changliang, Wiesner, Matthew, Garcia, Leibny Paola, Povey, Daniel, Khudanpur, Sanjeev, Wu, Jian

arXiv.org Artificial Intelligence

The Recurrent Neural Network-Transducer (RNN-T) is widely adopted in end-to-end (E2E) automatic speech recognition (ASR) tasks but depends heavily on large-scale, high-quality annotated data, which are often costly and difficult to obtain. To mitigate this reliance, we propose a Weakly Supervised Transducer (WST), which integrates a flexible training graph designed to robustly handle errors in the transcripts without requiring additional confidence estimation or auxiliary pre-trained models. Empirical evaluations on synthetic and industrial datasets reveal that WST effectively maintains performance even with transcription error rates of up to 70%, consistently outperforming existing Connectionist Temporal Classification (CTC)-based weakly supervised approaches, such as Bypass Temporal Classification (BTC) and Omni-Temporal Classification (OTC). These results demonstrate the practical utility and robustness of WST in realistic ASR settings. The implementation will be publicly available.
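The "flexible training graph" idea can be illustrated schematically: alongside the usual arc that consumes each transcript token, the graph adds a bypass arc (the token may be spurious) and a wildcard arc (the token may be wrong), so an erroneous transcript need not be forced into the alignment. This is only a conceptual sketch in plain Python; the penalty values and function name are hypothetical, and the actual WST graph is built inside the transducer loss.

```python
def build_flexible_graph(transcript, skip_penalty=-3.0, wildcard_penalty=-5.0):
    """Build arcs (src_state, dst_state, label, score) for a linear
    transcript graph augmented with error-tolerant arcs.

    - normal arc:   consume the transcript token as written (score 0.0)
    - skip arc:     bypass a possibly spurious token (deletion, "<eps>")
    - wildcard arc: accept any label "*" in place of the token (substitution)
    The penalties are illustrative, not trained or tuned values.
    """
    arcs = []
    for i, tok in enumerate(transcript):
        arcs.append((i, i + 1, tok, 0.0))               # trust the transcript
        arcs.append((i, i + 1, "<eps>", skip_penalty))  # token may be spurious
        arcs.append((i, i + 1, "*", wildcard_penalty))  # token may be wrong
    return arcs

arcs = build_flexible_graph(["h", "e", "l", "l", "o"])
# Three arcs per transcript position: 5 tokens -> 15 arcs.
```

During training, the loss sums over all paths through such a graph, so a corrupted token can be absorbed by its skip or wildcard arc instead of distorting the alignment of the clean tokens around it.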