FedGMark: Certifiably Robust Watermarking for Federated Graph Learning
Yuxin Yang, Qiang Li, Yuan Hong, Binghui Wang
Federated graph learning (FedGL) is an emerging paradigm for collaboratively training on graph data from multiple clients. However, during development and deployment, FedGL models are susceptible to illegal copying and model theft. Backdoor-based watermarking is a well-known method for mitigating these attacks, as it offers ownership verification to the model owner. We take the first step toward protecting the ownership of FedGL models via backdoor-based watermarking. Existing techniques face challenges in achieving this goal: 1) they either cannot be directly applied or yield unsatisfactory performance; 2) they are vulnerable to watermark removal attacks; and 3) they lack formal guarantees. To address all these challenges, we propose FedGMark, the first certifiably robust backdoor-based watermarking method for FedGL. FedGMark leverages the unique graph structure and client information in FedGL to learn customized and diverse watermarks. It also introduces a novel GL architecture that facilitates defending against both empirical and theoretically worst-case watermark removal attacks.
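As a rough illustration of how backdoor-based ownership verification works in general (a hypothetical sketch, not FedGMark's actual procedure: the function and trigger representation below are made up), the owner queries a suspect model on watermark trigger inputs and claims ownership if the target-label accuracy clears a threshold:

```python
# Hypothetical sketch of backdoor-based ownership verification: a model
# carrying the watermark predicts the owner's target label on trigger inputs.

def verify_ownership(model, trigger_inputs, target_label, threshold=0.9):
    """Return True if the suspect model hits the watermark target label
    on at least `threshold` of the trigger inputs."""
    hits = sum(1 for x in trigger_inputs if model(x) == target_label)
    return hits / len(trigger_inputs) >= threshold

# Toy usage: a "stolen" model that memorized the watermark vs. a clean one.
stolen = lambda x: 1          # always outputs the watermark target class
clean = lambda x: x % 2       # unrelated behavior
triggers = [2, 4, 6, 8, 10]   # stand-ins for trigger graphs

print(verify_ownership(stolen, triggers, target_label=1))  # True
print(verify_ownership(clean, triggers, target_label=1))   # False
```

The removal attacks the abstract mentions try to push the trigger accuracy below such a threshold, which is why a certified (worst-case) guarantee matters.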
HyperGCL: Multi-Modal Graph Contrastive Learning via Learnable Hypergraph Views
Saifuddin, Khaled Mohammed, Ji, Shihao, Akbas, Esra
Recent advancements in Graph Contrastive Learning (GCL) have demonstrated remarkable effectiveness in improving graph representations. However, relying on predefined augmentations (e.g., node dropping, edge perturbation, attribute masking) may result in the loss of task-relevant information and a lack of adaptability to diverse input data. Furthermore, the selection of negative samples remains rarely explored. In this paper, we introduce HyperGCL, a novel multi-modal GCL framework from a hypergraph perspective. HyperGCL constructs three distinct hypergraph views by jointly utilizing the input graph's structure and attributes, enabling a comprehensive integration of multiple modalities in contrastive learning. A learnable adaptive topology augmentation technique enhances these views by preserving important relations and filtering out noise. View-specific encoders capture essential characteristics from each view, while a network-aware contrastive loss leverages the underlying topology to define positive and negative samples effectively. Extensive experiments on benchmark datasets demonstrate that HyperGCL achieves state-of-the-art node classification performance. Building on the success of contrastive learning (CL) in computer vision and natural language processing [1], [2], CL approaches have been extended to graph data, known as Graph Contrastive Learning (GCL), where Graph Neural Networks (GNNs) learn robust representations by maximizing agreement between augmented graph views [3]-[6]. However, existing GCL methods face limitations. First, they often depend on handcrafted augmentations such as node dropping, edge perturbation, and attribute masking.
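The contrastive objective underlying such methods can be sketched with a generic InfoNCE-style loss. This is a simplified stand-in for HyperGCL's network-aware loss (here the positive for node i is simply row i of the other view, whereas the paper selects positives and negatives from the underlying topology):

```python
import numpy as np

# Generic InfoNCE-style contrastive loss over two sets of view embeddings.

def info_nce(z1, z2, tau=0.5):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                      # cosine similarity / temperature
    sim = sim - sim.max(axis=1, keepdims=True)   # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))   # matched views
mismatched = info_nce(z, np.roll(z, 1, axis=0))              # misaligned pairs
print(aligned < mismatched)  # True: matched views incur the lower loss
```

Maximizing agreement between views amounts to minimizing this loss, which pulls each node's two view embeddings together while pushing apart the rest.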
Unveiling Reasoning Thresholds in Language Models: Scaling, Fine-Tuning, and Interpretability through Attention Maps
Hsiao, Yen-Che, Dutta, Abhishek
This study investigates the in-context learning capabilities of various decoder-only transformer-based language models with different model sizes and training data, including GPT2, SmolLM2, OpenELM, TinyLlama, Stable LM, and Gemma 2. We identify a critical parameter threshold (~1.6 billion), beyond which reasoning performance improves significantly in tasks such as commonsense reasoning in multiple-choice question answering and deductive reasoning. Specifically, models above this threshold achieve better success rates in chain-of-thought (CoT) prompting for deductive reasoning tasks, especially those requiring longer reasoning chains, such as proof by contradiction and disjunction elimination. To address limitations in sub-threshold models, we demonstrate that fine-tuning with task-specific exemplars substantially enhances reasoning performance, enabling accurate CoT generation even without additional exemplars in the prompt for tasks with shorter reasoning chains. Finally, our analysis of attention maps reveals that models capable of generating correct CoTs exhibit higher token-level attention scores on subsequent correct tokens and the correct parts of speech, providing interpretability insights into reasoning processes. These findings collectively advance understanding of reasoning capabilities in decoder-only transformer-based models. The code is available at: https://github.com/AnnonymousForPapers/CoT_Reasoning_Test.
Learning Euler Factors of Elliptic Curves
Babei, Angelica, Charton, François, Costa, Edgar, Huang, Xiaoyu, Lee, Kyu-Hwan, Lowry-Duda, David, Narayanan, Ashvni, Pozdnyakov, Alexey
We apply transformer models and feedforward neural networks to predict Frobenius traces $a_p$ of elliptic curves from other traces $a_q$. We train further models to predict $a_p \bmod 2$ from $a_q \bmod 2$, and perform cross-analyses such as predicting $a_p \bmod 2$ from $a_q$. Our experiments reveal that these models achieve high accuracy, even in the absence of explicit number-theoretic tools such as functional equations of $L$-functions. We also present partial interpretability findings.
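For concreteness, the quantity being predicted satisfies $a_p = p + 1 - \#E(\mathbb{F}_p)$. A naive point-counting sketch for a curve in short Weierstrass form $y^2 = x^3 + ax + b$ (illustrative only; the paper's training data would come from large curve databases, not this brute-force count):

```python
# Frobenius trace a_p = p + 1 - #E(F_p) by naive point counting.

def frobenius_trace(a, b, p):
    """Count affine points of y^2 = x^3 + a*x + b over F_p, plus the
    point at infinity, and return a_p = p + 1 - #E(F_p)."""
    sqrt_counts = {}                       # value -> number of square roots mod p
    for y in range(p):
        sqrt_counts[y * y % p] = sqrt_counts.get(y * y % p, 0) + 1
    affine = sum(sqrt_counts.get((x**3 + a * x + b) % p, 0) for x in range(p))
    return p + 1 - (affine + 1)

# The CM curve y^2 = x^3 + x has a_p = 0 whenever p = 3 (mod 4).
print(frobenius_trace(1, 0, 3))   # 0
print(frobenius_trace(1, 0, 5))   # 2
```

The Hasse bound $|a_p| \le 2\sqrt{p}$ constrains the targets, which is part of what makes the prediction task well posed.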
Mathematical Data Science
Douglas, Michael R., Lee, Kyu-Hwan
In this article we discuss an approach to mathematical discovery that one can call mathematical data science. In this paradigm, one studies mathematical objects collectively rather than individually, by creating datasets and carrying out machine learning experiments and interpretations. Broadly speaking, the field of data science is concerned with assembling, curating, and analyzing large datasets, and with developing methods that enable its users not just to answer predetermined questions about the data but to explore it, make simple descriptions and pictures, and arrive at novel insights. This certainly sounds promising as a tool for mathematical discovery! Mathematical data science is not new and has historically led to very important results. A famous example is the work of Birch and Swinnerton-Dyer leading to their conjecture [BSD65], based on computer generation of elliptic curves and linear regression analysis of the resulting data. However, the field really started to take off with the deep learning revolution and with the easy access to ML models provided by platforms such as PyTorch and TensorFlow, and built into computer algebra systems such as Mathematica, Magma and SageMath.
Uncertainty Quantification of Wind Gust Predictions in the Northeast US: An Evidential Neural Network and Explainable Artificial Intelligence Approach
Jahan, Israt, Schreck, John S., Gagne, David John, Becker, Charlie, Astitha, Marina
Machine learning has shown promise in reducing bias in numerical weather model predictions of wind gusts. Yet, ML models underperform in predicting high gusts, even with additional observations, due to the right-skewed distribution of gusts. Uncertainty quantification (UQ) addresses this by identifying when predictions are reliable and when they need cautious interpretation. Using data from 61 extratropical storms in the Northeastern USA, we introduce the evidential neural network (ENN) as a novel approach for UQ in gust predictions, leveraging atmospheric variables from the Weather Research and Forecasting (WRF) model as features and gust observations as targets. Explainable artificial intelligence (XAI) techniques demonstrated that key predictive features also contributed to higher uncertainty. Estimated uncertainty correlated with storm intensity and spatial gust gradients. The ENN allowed constructing gust prediction intervals without requiring an ensemble. From an operational perspective, providing gust forecasts with quantified uncertainty enhances stakeholders' confidence in risk assessment and response planning for extreme gust events.
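A sketch of how an evidential network's outputs decompose into a prediction and two kinds of uncertainty, following the standard deep evidential regression formulation (the network emits Normal-Inverse-Gamma parameters; the numbers below are made up for illustration, not the paper's values):

```python
# Normal-Inverse-Gamma outputs (gamma, nu, alpha, beta) -> prediction and
# aleatoric/epistemic uncertainty, as in deep evidential regression.

def evidential_summary(gamma, nu, alpha, beta):
    prediction = gamma                     # E[mu]: the point forecast
    aleatoric = beta / (alpha - 1)         # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1))  # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic

# A confident gust prediction (large nu, alpha) vs. an uncertain one.
print(evidential_summary(gamma=25.0, nu=50.0, alpha=10.0, beta=9.0))
# (25.0, 1.0, 0.02)
print(evidential_summary(gamma=40.0, nu=0.5, alpha=1.5, beta=4.0))
# (40.0, 8.0, 16.0)
```

Because all four parameters come from a single forward pass, prediction intervals follow without training an ensemble, which is the operational advantage the abstract highlights.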
AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters
Reducing model redundancy is an important task when deploying complex deep learning models on resource-limited or time-sensitive devices. Directly regularizing or modifying weight values makes the pruning procedure less robust and more sensitive to the choice of hyperparameters, and it also requires prior knowledge to tune different hyperparameters for different models. To build a better-generalized and easy-to-use pruning method, we propose AutoPrune, which prunes the network by optimizing a set of trainable auxiliary parameters instead of the original weights. Instability and noise during training on the auxiliary parameters do not directly affect the weight values, which makes the pruning process more robust to noise and less sensitive to hyperparameters. Moreover, we design gradient update rules for the auxiliary parameters to keep them consistent with the pruning task. Our method can automatically eliminate network redundancy with recoverability, relieving the complicated prior knowledge required to design thresholding functions and reducing the time for trial and error. We evaluate our method with LeNet and VGG-like networks on the MNIST and CIFAR-10 datasets, and with AlexNet, ResNet, and MobileNet on ImageNet to establish the scalability of our work. Results show that our model achieves state-of-the-art sparsity.
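The core idea of pruning through auxiliary parameters can be sketched as follows (a simplified, hypothetical stand-in for AutoPrune's mechanism, not its actual update rules): each weight gets a trainable score, the forward pass gates the weight by the sign of its score, and pruning flips scores negative instead of touching the weights, so pruned weights stay recoverable.

```python
import numpy as np

# Gate weights by trainable auxiliary scores; pruning acts on the scores,
# never on the weights themselves.

def apply_mask(weights, aux):
    return weights * (aux > 0)      # hard binary gate on the auxiliary scores

w = np.array([0.8, -0.05, 0.6, 0.01])   # toy layer weights
m = np.array([1.0, 1.0, 1.0, 1.0])      # auxiliary scores, all "keep"

m[[1, 3]] = -1.0                        # prune the small weights via m
pruned = apply_mask(w, m)               # positions 1 and 3 now contribute 0

m[[1, 3]] = 1.0                         # recoverability: flip the scores back
restored = apply_mask(w, m)             # original weights are intact
```

Since `w` is never modified, a weight pruned too aggressively can be reinstated simply by letting its auxiliary score drift back above zero during training.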
A Sparse Interactive Model for Matrix Completion with Side Information
Jin Lu, Guannan Liang, Jiangwen Sun, Jinbo Bi
Matrix completion methods can benefit from side information beyond the partially observed matrix. The use of side features that describe the row and column entities of a matrix has been shown to reduce the sample complexity for completing the matrix. We propose a novel sparse formulation that explicitly models the interaction between the row and column side features to approximate the matrix entries. Unlike earlier methods, this model does not require a low-rank condition on the model parameter matrix. We prove that when the side features span the latent feature space of the matrix to be recovered, the number of observed entries needed for an exact recovery is O(log N), where N is the size of the matrix. If the side features are corrupted latent features of the matrix with a small perturbation, our method can achieve an ɛ-recovery with O(log N) sample complexity.
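The interactive model form can be illustrated as M ≈ A Z Bᵀ, where A and B hold row- and column-entity side features and the interaction matrix Z is sparse (a hypothetical sketch: the names, shapes, and values below are made up, not the paper's notation):

```python
import numpy as np

# Each matrix entry is a bilinear form of the two side-feature vectors
# through a sparse interaction matrix Z.

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))      # side features for 6 row entities
B = rng.normal(size=(5, 4))      # side features for 5 column entities
Z = np.zeros((4, 4))
Z[0, 2] = 1.5                    # only two feature interactions are
Z[3, 1] = -0.7                   # active: Z is sparse

M = A @ Z @ B.T                  # full matrix implied by the model
i, j = 2, 3
entry = A[i] @ Z @ B[j]          # any entry needs only two feature vectors and Z
print(np.isclose(M[i, j], entry))  # True
```

Because every entry is determined by Z once the side features are given, recovering the sparse Z from a few observed entries recovers the whole matrix, which is the intuition behind the O(log N) sample complexity.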