Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

Neural Information Processing Systems

Conventional saliency maps highlight input features to which neural network predictions are highly sensitive. We take a different approach to saliency, in which we identify and analyze the network parameters, rather than inputs, which are responsible for erroneous decisions. We first verify that identified salient parameters are indeed responsible for misclassification by showing that turning these parameters off improves predictions on the associated samples more than turning off the same number of random or least salient parameters. We further validate the link between salient parameters and network misclassification errors by observing that fine-tuning a small number of the most salient parameters on a single sample results in error correction on other samples which were misclassified for similar reasons -- nearest neighbors in the saliency space. After validating our parameter-space saliency maps, we demonstrate that samples which cause similar parameters to malfunction are semantically similar. Further, we introduce an input-space saliency counterpart which reveals how image features cause specific network components to malfunction.
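The core idea, ranking parameters by how strongly the loss gradient implicates them, can be illustrated on a toy logistic-regression model. This is a hedged sketch only: the paper works on deep-network parameters and uses its own normalization and filter-level aggregation, and all names below are illustrative.

```python
import numpy as np

def parameter_saliency(w, X, y):
    """Magnitude of the loss gradient w.r.t. each parameter of a logistic
    regression model, used here as a toy stand-in for parameter-space
    saliency (the paper operates on deep-network parameters)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    grad = X.T @ (p - y) / len(y)        # gradient of mean log-loss
    return np.abs(grad)

# Deterministic toy data: only feature 0 carries label information,
# so the weight attached to it should be the most salient parameter.
X = np.array([[ 1.0,  0.1, -0.1],
              [-1.0,  0.1,  0.1],
              [ 2.0, -0.1,  0.1],
              [-2.0, -0.1, -0.1]])
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)

sal = parameter_saliency(w, X, y)
print(sal.argmax())  # prints 0: parameter 0 dominates the saliency ranking
```

In the paper's setting, "turning off" the analogous top-ranked network parameters is what improves predictions on the misclassified samples.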


Don't Reach for the Stars: Rethinking Topology for Resilient Federated Learning

Konstantin, Mirko, Mukhopadhyay, Anirban

arXiv.org Artificial Intelligence

Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy by keeping data local. Traditional FL approaches rely on a centralized, star-shaped topology, where a central server aggregates model updates from clients. However, this architecture introduces several limitations, including a single point of failure, limited personalization, poor robustness to distribution shifts, and vulnerability to malfunctioning clients. Moreover, update selection in centralized FL often relies on low-level parameter differences, which can be unreliable when client data is not independent and identically distributed, and offers clients little control. In this work, we propose a decentralized, peer-to-peer (P2P) FL framework that leverages the flexibility of the P2P topology to enable each client to identify and aggregate a personalized set of trustworthy and beneficial updates. This framework is Local Inference Guided Aggregation for Heterogeneous Training Environments to Yield Enhancement Through Agreement and Regularization (LIGHTYEAR). Central to our method is an agreement score, computed on a local validation set, which quantifies the semantic alignment of incoming updates in function space with respect to the client's reference model. Each client uses this score to select a tailored subset of updates and performs aggregation with a regularization term that further stabilizes training. Our empirical evaluation across five datasets shows that the proposed approach consistently outperforms both centralized baselines and existing P2P methods in terms of client-level performance, particularly under adversarial and heterogeneous conditions.
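The select-then-regularize step can be sketched on linear models. This is a minimal illustration under assumed simplifications (sign agreement as the "function-space" score, plain averaging of the selected updates); LIGHTYEAR's actual score and aggregation rule may differ, and all names are illustrative.

```python
import numpy as np

def agreement_score(peer_w, ref_w, X_val):
    """Fraction of local validation inputs on which a peer model and the
    client's reference model agree in predicted sign: a toy notion of
    function-space agreement."""
    return float(np.mean(np.sign(X_val @ peer_w) == np.sign(X_val @ ref_w)))

def aggregate(ref_w, peer_ws, X_val, k=2, lam=0.5):
    """Keep the k highest-agreement peer updates, average them, and pull
    the result toward the reference model (regularization weight lam)."""
    scores = [agreement_score(w, ref_w, X_val) for w in peer_ws]
    top = np.argsort(scores)[-k:]
    avg = np.mean([peer_ws[i] for i in top], axis=0)
    return lam * ref_w + (1.0 - lam) * avg

rng = np.random.default_rng(0)
X_val = rng.normal(size=(50, 2))
ref_w = np.array([1.0, 0.0])
peers = [np.array([0.9, 0.1]),    # benign, close to the reference
         np.array([1.1, -0.1]),   # benign
         np.array([-1.0, 0.0])]   # adversarial: flips every prediction

new_w = aggregate(ref_w, peers, X_val)
# The adversarial update scores zero agreement and is filtered out,
# so the aggregate stays aligned with the reference model.
```

The regularization term (`lam * ref_w`) is what keeps a client anchored even if several selected peers drift together.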


Appendix

Neural Information Processing Systems

The supplementary document is organized as follows: Sec. A presents the detailed network architectures of our adaptive module in FPN. Sec. B provides the implementation details of BEVFusion. Sec. D discusses the performance gain based on the object distance range. Sec. E reports the latency and memory footprint of BEVFusion.


'He lives for the goals' - robot Haaland returns from malfunction

BBC News

'He lives for the goals' - robot Haaland returns from malfunction Is Erling Haaland a big fan of Peter Crouch - or is he actually programmed like a robot? That may - or may not - be a question posed after Manchester City's impressive Premier League victory over in-form Bournemouth on Sunday. The Norway striker malfunctioned for only the second time this season when he failed to score in last weekend's loss at Aston Villa, but he was back to being a goal machine with a ruthlessly efficient first-half double against the Cherries. If he is hiding any nuts and bolts under those blonde locks of his, Haaland did prove he was still human by missing a couple of chances to complete his hat-trick. But his scary statistics this season have left many in awe of the 25-year-old's prowess in front of goal, prompting a robot dance to mark his opener in the win that took his side up to second place.



Fault Tolerant Multi-Agent Learning with Adversarial Budget Constraints

Mguni, David, Sun, Yaqi, Chen, Haojun, Darabi, Amir, Orimoloye, Larry Olanrewaju, Yang, Yaodong

arXiv.org Artificial Intelligence

In multi-agent systems, the safe and reliable execution of tasks often depends on agents correctly coordinating their actions. However, in real-world deployments, failures of computational components are inevitable, presenting a critical challenge: ensuring that multi-agent reinforcement learning (MARL) policies remain effective even when some agents malfunction. We propose the Multi-Agent Robust Training Algorithm (MARTA), a plug-and-play framework for training MARL agents to be resilient to potentially severe faults. MARTA operates in cooperative multi-agent settings where agents may lose the ability to execute their intended actions. It learns to identify failure scenarios that are especially detrimental to system performance and equips agents with strategies to mitigate their impact. At the heart of MARTA is a novel adversarial Markov game in which an adversary -- modelled via \emph{Markov switching controls} -- learns to disable agents in high-risk state regions, while the remaining agents are trained to \emph{jointly} best-respond to such targeted malfunctions. To ensure practicality, MARTA enforces a malfunction budget, constraining the adversary to a fixed number of failures and learning robust policies accordingly. We provide theoretical guarantees that MARTA converges to a Markov perfect equilibrium, ensuring agents optimally counteract worst-case faults. Empirically, we show that MARTA achieves state-of-the-art fault-tolerant performance across benchmark environments, including Multi-Agent Particle World and Level-Based Foraging.
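The malfunction budget itself is easy to illustrate in isolation. The sketch below uses a greedy selection over hypothetical per-agent impact estimates; MARTA's actual adversary is learned via Markov switching controls in an adversarial Markov game, so this shows only the budget constraint, not the learning.

```python
def budgeted_adversary(impact, budget):
    """Greedy illustration of a malfunction budget: disable the agents
    whose estimated failure impact on team return is largest, but never
    more than `budget` of them."""
    ranked = sorted(impact, key=impact.get, reverse=True)
    return set(ranked[:budget])

# Hypothetical per-agent impact estimates (e.g., the drop in team value
# when that agent's intended actions are disabled in high-risk states).
impact = {"scout": 0.9, "carrier": 0.2, "defender": 0.6}

disabled = budgeted_adversary(impact, budget=2)
# With a budget of 2, the two highest-impact agents are disabled;
# the remaining agents would then be trained to jointly best-respond.
```

Constraining the adversary this way is what keeps the robust policies practical: agents prepare for the worst `budget`-sized failure set rather than for all agents failing at once.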


Disentangling AI Alignment: A Structured Taxonomy Beyond Safety and Ethics

Baum, Kevin

arXiv.org Artificial Intelligence

Recent advances in AI research make it increasingly plausible that artificial agents with consequential real-world impact will soon operate beyond tightly controlled environments. Ensuring that these agents are not only safe but that they adhere to broader normative expectations is thus an urgent interdisciplinary challenge. Multiple fields -- notably AI Safety, AI Alignment, and Machine Ethics -- claim to contribute to this task. However, the conceptual boundaries and interrelations among these domains remain vague, leaving researchers without clear guidance in positioning their work. To address this meta-challenge, we develop a structured conceptual framework for understanding AI alignment. Rather than focusing solely on alignment goals, we introduce a taxonomy distinguishing the alignment aim (safety, ethicality, legality, etc.), scope (outcome vs. execution), and constituency (individual vs. collective). This structural approach reveals multiple legitimate alignment configurations, providing a foundation for practical and philosophical integration across domains, and clarifying what it might mean for an agent to be aligned all-things-considered.


Humanoid robot malfunctions, sparks viral panic

FOX News

Kurt Knutsson talks about a viral video that shows a humanoid robot going wild. A chilling video circulating on social media has reignited old anxieties about robots turning against their creators. The footage shows a Unitree H1 humanoid robot, a machine about the size of an adult human, suddenly flailing its arms and legs with alarming force during a test, coming dangerously close to two technicians. The scene has sparked heated debate about the safety of advanced robotics and artificial intelligence. But is this truly the beginning of something out of our worst fears, or is there just a straightforward technical explanation for what happened?


Reliability-Driven LiDAR-Camera Fusion for Robust 3D Object Detection

Sadeghian, Reza, Hooshyaripour, Niloofar, Joslin, Chris, Lee, WonSook

arXiv.org Artificial Intelligence

Accurate and robust 3D object detection is essential for autonomous driving, where fusing data from sensors like LiDAR and camera enhances detection accuracy. However, sensor malfunctions such as corruption or disconnection can degrade performance, and existing fusion models often struggle to maintain reliability when one modality fails. To address this, we propose ReliFusion, a novel LiDAR-camera fusion framework operating in the bird's-eye view (BEV) space. ReliFusion integrates three key components: the Spatio-Temporal Feature Aggregation (STFA) module, which captures dependencies across frames to stabilize predictions over time; the Reliability module, which assigns confidence scores to quantify the dependability of each modality under challenging conditions; and the Confidence-Weighted Mutual Cross-Attention (CW-MCA) module, which dynamically balances information from LiDAR and camera modalities based on these confidence scores. Experiments on the nuScenes dataset show that ReliFusion significantly outperforms state-of-the-art methods, achieving superior robustness and accuracy in scenarios with limited LiDAR fields of view and severe sensor malfunctions.
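The confidence-weighted balancing can be sketched on flat feature vectors. This is a toy stand-in for the CW-MCA module, which actually operates with cross-attention on BEV feature maps; the softmax normalization and vector blending below are assumed simplifications.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of scores."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence_weighted_fusion(lidar_feat, cam_feat, c_lidar, c_cam):
    """Blend two modality features by normalized reliability scores: a toy
    stand-in for confidence-weighted fusion of LiDAR and camera features."""
    w = softmax([c_lidar, c_cam])
    return w[0] * lidar_feat + w[1] * cam_feat

lidar = np.array([1.0, 0.0])
camera = np.array([0.0, 1.0])

balanced = confidence_weighted_fusion(lidar, camera, 0.0, 0.0)   # 50/50 mix
degraded = confidence_weighted_fusion(lidar, camera, 5.0, -5.0)  # camera unreliable
# When the camera's confidence collapses, the fused feature tracks LiDAR.
```

This is the behavior the Reliability module enables: a corrupted or disconnected modality receives a low confidence score and contributes little to the fused representation.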


Advances in Multi-agent Reinforcement Learning: Persistent Autonomy and Robot Learning Lab Report 2024

Azadeh, Reza

arXiv.org Artificial Intelligence

Multi-Agent Reinforcement Learning (MARL) approaches have emerged as popular solutions to address the general challenges of cooperation in multi-agent environments, where the success of achieving shared or individual goals critically depends on the coordination and collaboration between agents. However, existing cooperative MARL methods face several challenges intrinsic to multi-agent systems, such as the curse of dimensionality, non-stationarity, and the need for a global exploration strategy. Moreover, the presence of agents with constraints (e.g., limited battery life, restricted mobility) or distinct roles further exacerbates these challenges. This document provides an overview of recent advances in MARL conducted at the Persistent Autonomy and Robot Learning (PeARL) lab at the University of Massachusetts Lowell. We briefly discuss various research directions and present a selection of approaches proposed in our most recent publications. For each proposed approach, we also highlight potential future directions to further advance the field.