
FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models Supplementary Materials 1 Dataset 1.1 Links and Preservation

Neural Information Processing Systems

The croissant metadata record is available at croissant. We chose GitHub and Google Drive to store our code and dataset, respectively. Both are widely recognized as reliable data storage platforms, ensuring long-term preservation. We highly recommend downloading the raw data directly and following the provided instructions to simplify the data processing steps. Our dataset is structured as follows: the local directory contains client-specific data for local training, while the all clients directory aggregates data from all clients for federated learning.
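A minimal sketch of the described layout and the aggregation step, assuming per-client JSON files; the directory and file names here are illustrative, not the benchmark's actual ones:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical layout mirroring the description: per-client files under
# "local/", and an aggregated file under "all_clients/".
root = Path(tempfile.mkdtemp())
(root / "local").mkdir()
(root / "all_clients").mkdir()

clients = {"client_0": [{"text": "hi"}], "client_1": [{"text": "hello"}]}
for name, records in clients.items():
    (root / "local" / f"{name}.json").write_text(json.dumps(records))

# Aggregate every client's records for federated-learning use.
merged = []
for f in sorted((root / "local").glob("*.json")):
    merged.extend(json.loads(f.read_text()))
(root / "all_clients" / "all.json").write_text(json.dumps(merged))
```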


QuanvNeXt: An end-to-end quanvolutional neural network for EEG-based detection of major depressive disorder

Orka, Nabil Anan, Haque, Ehtashamul, Jannat, Maftahul, Awal, Md Abdul, Moni, Mohammad Ali

arXiv.org Artificial Intelligence

This study presents QuanvNeXt, an end-to-end fully quanvolutional model for EEG-based depression diagnosis. QuanvNeXt incorporates a novel Cross Residual block, which reduces feature homogeneity and strengthens cross-feature relationships while retaining parameter efficiency. We evaluated QuanvNeXt on two open-source datasets, where it achieved an average accuracy of 93.1% and an average AUC-ROC of 97.2%, outperforming state-of-the-art baselines such as InceptionTime (91.7% accuracy, 95.9% AUC-ROC). An uncertainty analysis across Gaussian noise levels demonstrated well-calibrated predictions, with ECE scores remaining low (0.0436, Dataset 1) to moderate (0.1159, Dataset 2) even at the highest perturbation (ε = 0.1). Additionally, a post-hoc explainable AI analysis confirmed that QuanvNeXt effectively identifies and learns spectrotemporal patterns that distinguish between healthy controls and major depressive disorder. Overall, QuanvNeXt establishes an efficient and reliable approach for EEG-based depression diagnosis.
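The ECE scores cited above follow the standard Expected Calibration Error recipe: bin predictions by confidence and take the weighted gap between average confidence and accuracy in each bin. A minimal sketch (the paper's exact binning choices are not stated here):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: weighted |confidence - accuracy| gap per confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece
```

A perfectly calibrated model (confidence matches accuracy in every bin) scores 0; the 0.0436 and 0.1159 values above correspond to small and moderate miscalibration, respectively.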


An Additive Manufacturing Part Qualification Framework: Transferring Knowledge of Stress-strain Behaviors from Additively Manufactured Polymers to Metals

Duan, Chenglong, Wu, Dazhong

arXiv.org Artificial Intelligence

Part qualification is crucial in additive manufacturing (AM) because it ensures that additively manufactured parts can be consistently produced and reliably used in critical applications. Part qualification aims at verifying that an additively manufactured part meets performance requirements; therefore, predicting the complex stress-strain behaviors of additively manufactured parts is critical. We develop a dynamic time warping (DTW)-transfer learning (TL) framework for additive manufacturing part qualification by transferring knowledge of the stress-strain behaviors of additively manufactured low-cost polymers to metals. Specifically, the framework employs DTW to select the polymer dataset that is most relevant to the target metal dataset as the source domain. Using a long short-term memory (LSTM) model, four source polymers (i.e., Nylon, PLA, CF-ABS, and Resin) and three target metals (i.e., AlSi10Mg, Ti6Al4V, and carbon steel) fabricated by different AM techniques are utilized to demonstrate the effectiveness of the DTW-TL framework. Experimental results show that the DTW-TL framework identifies the closest match between polymers and metals to select one single polymer dataset as the source domain. The DTW-TL model achieves a mean absolute percentage error as low as 12.41% and a coefficient of determination as high as 0.96 across the three target metals, outperforming both the vanilla LSTM model without TL and the TL model pre-trained on all four polymer datasets as the source domain.
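The source-selection step can be sketched as a nearest-neighbor search under DTW distance. The curves below are made-up toy numbers, not data from the paper; the point is only the selection mechanism:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping on 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy stress-strain curves: pick the polymer whose curve is closest,
# under DTW, to the target metal's curve.
target_metal = [0.0, 0.5, 0.9, 1.0]
polymers = {
    "Nylon": [0.0, 0.4, 0.8, 1.0],
    "PLA":   [0.0, 0.1, 0.2, 0.3],
}
source = min(polymers, key=lambda k: dtw_distance(polymers[k], target_metal))
```

The selected source dataset then seeds pre-training of the LSTM before fine-tuning on the scarce metal data.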


A Privacy-Preserving Federated Framework with Hybrid Quantum-Enhanced Learning for Financial Fraud Detection

Sawaika, Abhishek, Krishna, Swetang, Tomar, Tushar, Suggisetti, Durga Pritam, Lal, Aditi, Shrivastav, Tanmaya, Innan, Nouhaila, Shafique, Muhammad

arXiv.org Artificial Intelligence

Rapid growth of digital transactions has led to a surge in fraudulent activities, challenging traditional detection methods in the financial sector. To tackle this problem, we introduce a specialised federated learning framework that uniquely combines a quantum-enhanced Long Short-Term Memory (LSTM) model with advanced privacy-preserving techniques. By integrating quantum layers into the LSTM architecture, our approach adeptly captures complex cross-transactional patterns, resulting in an approximate 5% performance improvement across key evaluation metrics compared to conventional models. Central to our framework is "FedRansel", a novel method designed to defend against poisoning and inference attacks, thereby reducing model degradation and inference accuracy by 4-8% compared to standard differential privacy mechanisms. This pseudo-centralised setup with a Quantum LSTM model enhances fraud detection accuracy and reinforces the security and confidentiality of sensitive financial data.


IQNN-CS: Interpretable Quantum Neural Network for Credit Scoring

Khan, Abdul Samad, Innan, Nouhaila, Khalique, Aeysha, Shafique, Muhammad

arXiv.org Artificial Intelligence

Credit scoring is a high-stakes task in financial services, where model decisions directly impact individuals' access to credit and are subject to strict regulatory scrutiny. While Quantum Machine Learning (QML) offers new computational capabilities, its black-box nature poses challenges for adoption in domains that demand transparency and trust. In this work, we present IQNN-CS, an interpretable quantum neural network framework designed for multiclass credit risk classification. The architecture combines a variational QNN with a suite of post-hoc explanation techniques tailored for structured data. To address the lack of structured interpretability in QML, we introduce Inter-Class Attribution Alignment (ICAA), a novel metric that quantifies attribution divergence across predicted classes, revealing how the model distinguishes between credit risk categories. Evaluated on two real-world credit datasets, IQNN-CS demonstrates stable training dynamics, competitive predictive performance, and enhanced interpretability. Our results highlight a practical path toward transparent and accountable QML models for financial decision-making.
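One plausible reading of an "attribution divergence across predicted classes" metric like ICAA is the average pairwise cosine distance between per-class mean attribution vectors; the paper's exact formula may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def inter_class_attribution_divergence(attributions, predicted_classes):
    """Illustrative ICAA-style metric: average pairwise cosine distance
    between the mean attribution vector of each predicted class.
    Higher values indicate more class-specific (divergent) attributions."""
    attributions = np.asarray(attributions, dtype=float)
    predicted_classes = np.asarray(predicted_classes)
    classes = sorted(set(predicted_classes.tolist()))
    means = [attributions[predicted_classes == c].mean(axis=0) for c in classes]
    dists = []
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            u, v = means[i], means[j]
            cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            dists.append(1.0 - cos)
    return float(np.mean(dists))
```

Orthogonal per-class attributions score 1.0; identical attributions score 0.0, signalling that the model does not distinguish risk categories by feature importance.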



1 Additional Results 1.1 Synthetically Generated Partially Annotated Datasets 1.1.1 Knowledge-graph Based Partially Annotated Dataset Generation

Neural Information Processing Systems

We use the CIFAR100 [8] and MS COCO panoptic segmentation [7] datasets for this purpose. We use the two highest-frequency ones, which results in 776 label categories. Let us consider two datasets as shown in Figure 1. Fine-grained mismatch problem: this problem occurs when a parent label (e.g. The third row shows the same, but for Dataset 2. Roughly, Dataset 1 has 3x more data than Dataset 2, with a total of 45k images across both.


AI-CNet3D: An Anatomically-Informed Cross-Attention Network with Multi-Task Consistency Fine-tuning for 3D Glaucoma Classification

Kenia, Roshan, Li, Anfei, Srivastava, Rishabh, Thakoor, Kaveri A.

arXiv.org Artificial Intelligence

Glaucoma is a progressive eye disease that leads to optic nerve damage, causing irreversible vision loss if left untreated. Optical coherence tomography (OCT) has become a crucial tool for glaucoma diagnosis, offering high-resolution 3D scans of the retina and optic nerve. However, the conventional practice of condensing information from 3D OCT volumes into 2D reports often results in the loss of key structural details. To address this, we propose a novel hybrid deep learning model that integrates cross-attention mechanisms into a 3D convolutional neural network (CNN), enabling the extraction of critical features from the superior and inferior hemiretinas, as well as from the optic nerve head (ONH) and macula, within OCT volumes. We introduce Channel Attention REpresentations (CAREs) to visualize cross-attention outputs and leverage them for consistency-based multi-task fine-tuning, aligning them with Gradient-Weighted Class Activation Maps (Grad-CAMs) from the CNN's final convolutional layer to enhance performance, interpretability, and anatomical coherence. We have named this model AI-CNet3D (AI-`See'-Net3D) to reflect its design as an Anatomically-Informed Cross-attention Network operating on 3D data. By dividing the volume along two axes and applying cross-attention, our model enhances glaucoma classification by capturing asymmetries between the hemiretinal regions while integrating information from the optic nerve head and macula. We validate our approach on two large datasets, showing that it outperforms state-of-the-art attention and convolutional models across all key metrics. Finally, our model is computationally efficient, reducing the parameter count by one-hundred-fold compared to other attention mechanisms while maintaining high diagnostic performance and comparable GFLOPS.
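The split-and-attend idea can be sketched with a minimal scaled dot-product cross-attention between the two halves of a feature volume. This toy version omits the learned query/key/value projections and the second split axis the paper uses:

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Minimal scaled dot-product cross-attention (no learned projections):
    queries from one half of the volume attend to keys/values from the other."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key tokens
    return weights @ kv_feats

# Toy feature volume split along the vertical axis into two "hemiretinas".
rng = np.random.default_rng(0)
volume = rng.normal(size=(8, 4, 6))          # (height, width, channels)
superior = volume[:4].reshape(-1, 6)         # tokens from the top half
inferior = volume[4:].reshape(-1, 6)         # tokens from the bottom half
attended = cross_attention(superior, inferior)
```

Each superior-half token is re-expressed as a weighted mix of inferior-half tokens, which is what lets the model surface asymmetries between the two regions.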


Physics-informed time series analysis with Kolmogorov-Arnold Networks under Ehrenfest constraints

Sen, Abhijit, Lukin, Illya V., Jacobs, Kurt, Kaplan, Lev, Sotnikov, Andrii G., Bondar, Denys I.

arXiv.org Artificial Intelligence

The prediction of quantum dynamical responses lies at the heart of modern physics. Yet, modeling these time-dependent behaviors remains a formidable challenge because quantum systems evolve in high-dimensional Hilbert spaces, often rendering traditional numerical methods computationally prohibitive. While large language models have achieved remarkable success in sequential prediction, quantum dynamics presents a fundamentally different challenge: forecasting the entire temporal evolution of quantum systems rather than merely the next element in a sequence. Existing neural architectures such as recurrent and convolutional networks often require vast training datasets and suffer from spurious oscillations that compromise physical interpretability. In this work, we introduce a fundamentally new approach: Kolmogorov-Arnold Networks (KANs) augmented with physics-informed loss functions that enforce the Ehrenfest theorems. Our method achieves superior accuracy with significantly less training data: it requires only 5.4 percent of the samples (200) compared to Temporal Convolutional Networks (3,700). We further introduce the Chain of KANs, a novel architecture that embeds temporal causality directly into the model design, making it particularly well-suited for time series modeling. Our results demonstrate that physics-informed KANs offer a compelling advantage over conventional black-box models, maintaining both mathematical rigor and physical consistency while dramatically reducing data requirements.
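A physics-informed loss of this kind adds a penalty for violating an Ehrenfest relation, e.g. d⟨x⟩/dt = ⟨p⟩/m, on top of the data-fit term. The sketch below estimates the derivative by finite differences; the paper's exact loss weighting and which Ehrenfest relations are enforced are assumptions here:

```python
import numpy as np

def ehrenfest_loss(x_pred, p_pred, x_true, p_true, dt, m=1.0, lam=1.0):
    """Data-fit MSE on predicted expectation values <x>, <p>, plus a
    penalty on violations of the first Ehrenfest relation d<x>/dt = <p>/m,
    with the time derivative estimated by finite differences."""
    data = np.mean((x_pred - x_true) ** 2) + np.mean((p_pred - p_true) ** 2)
    dxdt = np.gradient(x_pred, dt)           # finite-difference d<x>/dt
    physics = np.mean((dxdt - p_pred / m) ** 2)
    return data + lam * physics
```

For a trajectory that already satisfies the theorem (a free particle with ⟨x⟩ = t, ⟨p⟩ = m), the physics term vanishes, so the constraint only steers predictions that drift away from physically consistent evolution.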


Architectural change in neural networks using fuzzy vertex pooling

Ali, Shanookha, Niralda, Nitha, Mathew, Sunil

arXiv.org Artificial Intelligence

The process of pooling vertices involves the creation of a new vertex, which becomes adjacent to all the vertices that were originally adjacent to the endpoints of the vertices being pooled. After this, the endpoints of these vertices and all edges connected to them are removed. In this document, we introduce a formal framework for the concept of fuzzy vertex pooling (FVP) and provide an overview of its key properties with its applications to neural networks. The pooling model demonstrates remarkable efficiency in minimizing loss rapidly while maintaining competitive accuracy, even with fewer hidden layer neurons. However, this advantage diminishes over extended training periods or with larger datasets, where the model's performance tends to degrade. This study highlights the limitations of pooling in later stages of deep learning training, rendering it less effective for prolonged or large-scale applications. Consequently, pooling is recommended as a strategy for early-stage training in advanced deep learning models to leverage its initial efficiency.
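The pooling operation described above can be sketched in the crisp (non-fuzzy) case: pooling vertices u and v creates a new vertex adjacent to every former neighbor of u or v, then deletes u, v, and their incident edges. Membership grades on edges, which the fuzzy version adds, are omitted here:

```python
def pool_vertices(adj, u, v, new="uv"):
    """Pool u and v in an undirected graph given as dict: vertex -> neighbor set.
    The new vertex inherits the union of their neighborhoods."""
    neighbors = (adj[u] | adj[v]) - {u, v}
    for w in (u, v):                 # remove u, v and their incident edges
        for n in adj[w]:
            adj[n].discard(w)
        del adj[w]
    adj[new] = set(neighbors)        # wire up the pooled vertex
    for n in neighbors:
        adj[n].add(new)
    return adj

# Path graph a - b - c - d; pooling b and c yields a - uv - d.
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
pooled = pool_vertices(g, "b", "c")
```

In a neural-network reading, this merges two hidden units into one that connects to the union of their neighbors, which is how pooling reduces hidden-layer width.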