interconnection


WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models

Neural Information Processing Systems

The need for effective unlearning mechanisms in large language models (LLMs) is increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical generative AI practices. LLM unlearning is designed to reduce the impact of undesirable data influences and associated model capabilities without diminishing the model's utility on tasks unrelated to the information being forgotten. Despite growing interest, much of the existing research has focused on varied unlearning method designs to boost effectiveness and efficiency. However, the inherent relationship between model weights and LLM unlearning has not been extensively examined. In this paper, we systematically explore how model weights interact with unlearning processes in LLMs, and we design the weight attribution-guided LLM unlearning method, WAGLE, which unveils the interconnections between the 'influence' of weights and the 'influence' of data to forget and retain in LLM generation. By strategically guiding the LLM unlearning across different types of unlearning methods and tasks, WAGLE can erase the undesired content while maintaining the performance of the original tasks.
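
As a rough illustration of the weight-attribution idea, one simple heuristic (an illustrative assumption, not the paper's exact formulation) scores each weight by the magnitude of its gradient on the forget set relative to the retain set, then restricts unlearning updates to the top-scoring fraction; the function name, the saliency formula, and the `ratio` parameter are all hypothetical:

```python
import numpy as np

def attribution_mask(grad_forget, grad_retain, ratio=0.2):
    # Score each weight by how strongly it influences the forget data
    # relative to the retain data (hypothetical saliency heuristic).
    score = np.abs(grad_forget) / (np.abs(grad_retain) + 1e-8)
    # Keep only the top `ratio` fraction of weights for unlearning updates.
    k = max(1, int(ratio * score.size))
    thresh = np.partition(score.ravel(), -k)[-k]
    return score >= thresh  # boolean mask of weights eligible for unlearning

g_f = np.array([[2.0, 0.1], [0.5, 3.0]])  # toy forget-set gradients
g_r = np.array([[0.1, 2.0], [0.5, 0.1]])  # toy retain-set gradients
mask = attribution_mask(g_f, g_r, ratio=0.5)
```

Weights with large forget-gradient but small retain-gradient are selected, so the subsequent unlearning step leaves retain-critical weights untouched.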


ARMA Nets: Expanding Receptive Field for Dense Prediction

Neural Information Processing Systems

Global information is essential for dense prediction problems, whose goal is to compute a discrete or continuous label for each pixel in an image. Traditional convolutional layers in neural networks, initially designed for image classification, are restrictive in these problems since the filter size limits their receptive fields. In this work, we propose to replace any traditional convolutional layer with an autoregressive moving-average (ARMA) layer, a novel module with an adjustable receptive field controlled by learnable autoregressive coefficients. Compared with traditional convolutional layers, our ARMA layer enables explicit interconnections of the output neurons and learns its receptive field by adapting the autoregressive coefficients of these interconnections. The ARMA layer adjusts to different types of tasks: for tasks where global information is crucial, it can learn relatively large autoregressive coefficients that allow an output neuron's receptive field to cover the entire input; for tasks where only local information is required, it can learn small or near-zero autoregressive coefficients and automatically reduce to a traditional convolutional layer. We show both theoretically and empirically that the effective receptive field of networks with ARMA layers (named ARMA networks) expands with larger autoregressive coefficients. We also provably solve the instability problem of learning and prediction in the ARMA layer through a re-parameterization mechanism. Additionally, we demonstrate that ARMA networks substantially improve over their baselines on challenging dense prediction tasks, including video prediction and semantic segmentation.
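
A toy 1-D sketch can show how the autoregressive coefficient widens the receptive field (this simplifies the paper's 2-D formulation to a first-order recurrence; the function and its signature are illustrative):

```python
import numpy as np

def arma1d(x, ma_kernel, a):
    """Toy 1-D ARMA layer: a moving-average convolution followed by a
    first-order autoregressive pass y[i] = ma[i] + a * y[i-1]."""
    ma = np.convolve(x, ma_kernel, mode="same")  # local (convolutional) part
    y = np.zeros_like(ma)
    prev = 0.0
    for i, m in enumerate(ma):
        prev = m + a * prev  # AR term propagates context across positions
        y[i] = prev
    return y

x = np.zeros(16)
x[0] = 1.0                                 # unit impulse input
local = arma1d(x, np.array([1.0]), a=0.0)  # a=0: reduces to plain convolution
wide = arma1d(x, np.array([1.0]), a=0.9)   # a>0: long-range impulse response
```

With `a=0` the impulse response stays local; with `a=0.9` it decays as `0.9**i`, so a single input pixel influences outputs far away, matching the claim that the effective receptive field grows with the autoregressive coefficient.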


Fuzzy Hierarchical Multiplex

Kafantaris, Alexis

arXiv.org Artificial Intelligence

This paper analyzes a fuzzy multiplex from a logical perspective in a way that has not been formalized so far. A fuzzy multiplex is a nested structure with inner nodes representing sub-system-level agent traits and outer nodes representing system agents, while the ensemble as a whole is the system under consideration. A mathematical framework needed to describe that structure is formulated and then utilized. The system is first initialized using fuzzy set theory [2], inspired by Fuzzy Cognitive Maps [1]. Then a criterion that describes the structure is devised to implement a multiplex instead of a map [7] [8], and lastly system optimization is achieved. Furthermore, the theoretical context behind the multiplex is expounded in an attempt to establish a formal way of handling implications within a closed system using human intelligence. The paper is organized in sections following the reasoning process behind this unique idea.
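
A minimal sketch of two-level fuzzy aggregation over such a nested structure, using one conventional t-norm/t-conorm pair (min for fuzzy AND, max for fuzzy OR); the specific operators and membership values are assumptions, not taken from the paper:

```python
def aggregate(trait_memberships):
    """Two-level fuzzy aggregation: each agent's membership is the minimum
    (fuzzy AND) of its sub-system trait memberships; the system-level
    membership is the maximum (fuzzy OR) over the agents."""
    agents = [min(traits) for traits in trait_memberships]
    return max(agents), agents

# three agents, each with two inner trait memberships in [0, 1]
system, agents = aggregate([[0.8, 0.6], [0.9, 0.2], [0.4, 0.4]])
```

Other t-norms (e.g., product) would give a different but analogously structured aggregation.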


Revealing Interconnections between Diseases: from Statistical Methods to Large Language Models

Ermilova, Alina, Kornilov, Dmitrii, Samoilova, Sofia, Laptenkova, Ekaterina, Kolesnikova, Anastasia, Podplutova, Ekaterina, Senotrusova, Sofya, Sharaev, Maksim G.

arXiv.org Artificial Intelligence

Identifying disease interconnections through manual analysis of large-scale clinical data is labor-intensive, subjective, and prone to expert disagreement. While machine learning (ML) shows promise, three critical challenges remain: (1) selecting optimal methods from the vast ML landscape, (2) determining whether real-world clinical data (e.g., electronic health records, EHRs) or structured disease descriptions yield more reliable insights, and (3) the lack of "ground truth," as some disease interconnections remain unexplored in medicine. Large language models (LLMs) demonstrate broad utility, yet they often lack specialized medical knowledge. Our framework integrates the following: (i) a statistical co-occurrence analysis and a masked language modeling (MLM) approach using real clinical data; (ii) domain-specific BERT variants (Med-BERT and BioClinicalBERT); (iii) a general-purpose BERT and document retrieval; and (iv) four LLMs (Mistral, DeepSeek, Qwen, and YandexGPT). Our graph-based comparison of the obtained interconnection matrices shows that the LLM-based approach produces interconnections with the lowest diversity of ICD code connections to different diseases compared to the other methods, including the text-based and domain-based approaches. This suggests an important implication: LLMs have limited potential for discovering new interconnections. In the absence of ground-truth databases for medical interconnections between ICD codes, our results constitute a valuable medical disease ontology that can serve as a foundational resource for future clinical research and artificial intelligence applications in healthcare. Electronic health records (EHRs) provide a valuable resource for studying disease progression and relationships between diagnoses. Machine learning (ML) can help discover hidden patterns in medical data, but many existing models are hard to interpret. In particular, it is not always clear whether large language models (LLMs) make predictions based on meaningful medical knowledge or simply rely on textual similarities between diagnosis descriptions (Cui et al., 2025). This is especially critical in healthcare, where model decisions must align with established medical knowledge and pathophysiological mechanisms. We also analyze and compare the obtained results and summarize them into a medical disease ontology.
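
One plausible form of the statistical co-occurrence analysis mentioned above is pointwise mutual information (PMI) between ICD codes appearing in the same patient record; the helper name and the toy records are illustrative, and real pipelines would add smoothing and significance filtering:

```python
import math
from collections import Counter
from itertools import combinations

def cooccurrence_pmi(records):
    """PMI between pairs of ICD codes co-occurring in patient records:
    PMI(a, b) = log( p(a, b) / (p(a) * p(b)) )."""
    n = len(records)
    single = Counter()
    pair = Counter()
    for codes in records:
        uniq = sorted(set(codes))      # one count per code per patient
        single.update(uniq)
        pair.update(combinations(uniq, 2))
    return {
        p: math.log((c / n) / ((single[p[0]] / n) * (single[p[1]] / n)))
        for p, c in pair.items()
    }

# toy EHR: lists of ICD codes per patient (E11 diabetes, I10 hypertension, J45 asthma)
ehr = [["E11", "I10"], ["E11", "I10", "J45"], ["J45"], ["E11"]]
pmi = cooccurrence_pmi(ehr)
```

Positive PMI indicates two codes co-occur more often than independence would predict, which is one way to populate an interconnection matrix.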


LEGO: Spatial Accelerator Generation and Optimization for Tensor Applications

Lin, Yujun, Zhang, Zhekai, Han, Song

arXiv.org Artificial Intelligence

Modern tensor applications, especially foundation models and generative AI applications, require multiple input modalities (both vision and language), which increases the demand for flexible accelerator architectures. Existing frameworks suffer from a trade-off between design flexibility and productivity of RTL generation: they are either limited to very few hand-written templates or cannot automatically generate the RTL. To address this challenge, we propose the LEGO framework, which targets tensor applications and automatically generates spatial architecture designs and outputs synthesizable RTL code without handwritten RTL design templates. Leveraging an affine-transformation-based architecture representation, the LEGO front end finds interconnections between function units, synthesizes the memory system, and fuses different spatial dataflow designs based on data-reuse analysis. The LEGO back end then translates the hardware into a primitive-level graph to perform lower-level optimizations, and applies a set of linear-programming algorithms to optimally insert pipeline registers and reduce the overhead of unused logic when switching spatial dataflows. Our evaluation demonstrates that LEGO achieves 3.2x speedup and 2.4x energy efficiency compared to the previous work Gemmini, and can generate one architecture for diverse modern foundation models in generative AI applications.
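
The affine-transformation representation can be sketched in miniature: a dataflow assigns each loop iteration vector hardware coordinates via a linear map (the function, matrix, and loop bounds below are illustrative stand-ins, not LEGO's actual interface):

```python
import numpy as np

def map_iterations(T, bounds):
    """Affine dataflow mapping: iteration vector i -> (space..., time) = T @ i,
    a simplified version of an affine-transformation-based representation."""
    coords = {}
    for i in np.ndindex(*bounds):
        coords[i] = tuple(T @ np.array(i))
    return coords

# toy output-stationary matmul mapping: (m, n, k) -> (pe_row=m, pe_col=n, t=k),
# so each PE keeps one output element and streams the reduction over k in time
T = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
m = map_iterations(T, (2, 2, 2))
```

Changing `T` (e.g., sending `k` to a spatial axis) yields a different dataflow over the same loop nest, which is the degree of freedom such a front end explores.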


humancompatible.interconnect: Testing Properties of Repeated Uses of Interconnections of AI Systems

Nazarov, Rodion, Quinn, Anthony, Shorten, Robert, Marecek, Jakub

arXiv.org Artificial Intelligence

Artificial intelligence (AI) systems often interact with multiple agents. The regulation of such AI systems often requires that a priori guarantees of fairness and robustness be satisfied. With stochastic models of agents' responses to the outputs of AI systems, such a priori guarantees require non-trivial reasoning about the corresponding stochastic systems. Here, we present an open-source PyTorch-based toolkit for the use of stochastic control techniques in modelling interconnections of AI systems and properties of their repeated uses. It models robustness and fairness desiderata in a closed-loop fashion, and provides a priori guarantees for these interconnections. The PyTorch-based toolkit removes much of the complexity associated with the provision of fairness guarantees for closed-loop models of multi-agent systems.
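
The flavor of such a closed-loop a priori guarantee can be sketched generically (this does not use the toolkit's API; the update rule and values are hypothetical): if one round of system output plus agent response is a contraction, the repeated use converges to a unique fixed point regardless of the initial state.

```python
def closed_loop(step, x0, iters=100):
    """Repeatedly apply one round of AI output followed by agent response.
    If `step` is a contraction, iteration converges to the same fixed
    point from any initial state -- a guarantee that holds a priori."""
    x = x0
    for _ in range(iters):
        x = step(x)
    return x

# toy agent-response model: the allocation is nudged halfway toward 0.6
# each round; the map x -> 0.5*x + 0.3 is a contraction with fixed point 0.6
step = lambda x: 0.5 * x + 0.5 * 0.6
a = closed_loop(step, 0.0)
b = closed_loop(step, 1.0)
```

Independence from the initial state is what allows fairness or robustness properties of the fixed point to be certified before deployment.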


Scalable Learning of High-Dimensional Demonstrations with Composition of Linear Parameter Varying Dynamical Systems

Agrawal, Shreenabh, Kussaba, Hugo T. M., Chen, Lingyun, Binny, Allen Emmanuel, Swikir, Abdalla, Jagtap, Pushpak, Haddadin, Sami

arXiv.org Artificial Intelligence

Learning from Demonstration (LfD) techniques enable robots to learn and generalize tasks from user demonstrations, eliminating the need for coding expertise among end-users. One established technique to implement LfD in robots is to encode demonstrations in a stable Dynamical System (DS). However, finding a stable dynamical system entails solving an optimization problem with bilinear matrix inequality (BMI) constraints, a non-convex problem which, depending on the number of scalar constraints and variables, demands significant computational resources and is susceptible to numerical issues such as floating-point errors. To address these challenges, we propose a novel compositional approach that enhances the applicability and scalability of learning stable DSs with BMIs.
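
To make the stability constraint concrete: for a linear DS dx/dt = A x, one sufficient Lyapunov condition (taking P = I, a simplification of the BMI constraints discussed above) is that A + A^T be negative definite. A minimal check, with illustrative matrices:

```python
import numpy as np

def is_stable(A, tol=1e-9):
    """Check A + A^T < 0 (Lyapunov condition with P = I), sufficient for
    global asymptotic stability of the linear DS dx/dt = A x."""
    eig = np.linalg.eigvalsh(A + A.T)  # eigenvalues of the symmetric part
    return bool(eig.max() < -tol)

A_stable = np.array([[-1.0, 0.2],
                     [-0.2, -0.5]])   # symmetric part diag(-2, -1): stable
A_unstable = np.array([[0.5, 0.0],
                       [0.0, -1.0]])  # positive eigenvalue: unstable
```

With a general P the condition A^T P + P A < 0 becomes bilinear in (A, P) when both are learned, which is the source of the BMI difficulty the paper addresses.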


Towards a Small Language Model Lifecycle Framework

Miraghaei, Parsa, Moreschini, Sergio, Kolehmainen, Antti, Hästbacka, David

arXiv.org Artificial Intelligence

Benchmark suites such as MMLU and HellaSwag measure core capabilities but are vulnerable to data contamination, making careful curation and transparent reporting essential [OS21], [OS2], [OS13], [OS6]. Trustworthiness evaluation covers robustness to adversarial inputs, privacy protection, reliability (including hallucination and consistency), and safety concerns such as toxicity and bias [OS2], [OS6], all of which are vital for user-facing or high-stakes deployments. Resource efficiency--spanning computational cost, memory, energy, and deployment overhead--is particularly important for SLMs and shapes deployment strategies in constrained environments [OS5], [OS6]. Automated evaluation methods range from statistical scorers like BLEU and ROUGE to model-based and hybrid approaches, with the latter providing stronger alignment with human judgment and greater scalability [OS29], [OS30]. Ultimately, evaluation should be an integrated, continuous process that informs model iteration, balances performance with sustainability and safety, and supports real-world usability at scale.
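
As a pointer to what the statistical scorers mentioned above compute, here is clipped unigram precision, the simplest ingredient of BLEU (real BLEU combines n-gram precisions up to 4-grams with a brevity penalty; the example strings are arbitrary):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: each candidate token counts only as many
    times as it appears in the reference, then divide by candidate length."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

p = unigram_precision("the cat sat", "the cat sat on the mat")
q = unigram_precision("the the the", "the cat")  # clipping penalizes repetition
```

The clipping step is why degenerate outputs that repeat a common reference word do not score highly, one reason such statistical scorers remain useful baselines alongside model-based evaluation.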