Neural Attribution for Semantic Bug-Localization in Student Programs

Neural Information Processing Systems

Providing feedback is an integral part of teaching. Most open online courses on programming use automated grading systems to support programming assignments and give real-time feedback. These systems usually rely on test results to quantify a program's functional correctness and return failing tests to the students as feedback. However, students may find it difficult to debug their programs if they receive no hints about where the bug is and how to fix it. In this work, we present NeuralBugLocator, a deep learning-based technique that can localize the bugs in a faulty program with respect to a failing test, without even running the program. At the heart of our technique is a novel tree convolutional neural network which is trained to predict whether a program passes or fails a given test. To localize the bugs, we analyze the trained network using a state-of-the-art neural prediction attribution technique to see which lines of the program make it predict the test outcomes. Our experiments show that NeuralBugLocator is generally more accurate than two state-of-the-art program-spectrum-based baselines and one syntactic-difference-based bug-localization baseline.
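The attribution idea the abstract sketches can be illustrated with a toy, hypothetical stand-in: a linear pass/fail scorer over per-line feature vectors, attributed with gradient-times-input and ranked by attribution score. The paper's actual model is a tree CNN and its attribution technique is more sophisticated; every name below is illustrative.

```python
import numpy as np

# Toy stand-in for the trained pass/fail classifier: a linear scorer over
# per-line feature vectors (hypothetical; the real model is a tree CNN).
rng = np.random.default_rng(0)
num_lines, dim = 6, 8
line_features = rng.normal(size=(num_lines, dim))  # one row per program line
weights = rng.normal(size=dim)                      # "trained" parameters

def fail_score(features):
    # Higher score = model predicts the test fails.
    return features @ weights

# Gradient-times-input attribution: for a linear scorer the gradient w.r.t.
# each line's features is just `weights`, so a line's attribution is the sum
# of its feature-wise contributions to the fail score.
attributions = (line_features * weights).sum(axis=1)

# Lines are ranked as bug-localization suspects by attribution score.
suspect_ranking = np.argsort(-attributions)
print(suspect_ranking)
```

In the real setting the ranking would be compared against the lines a developer actually had to fix.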



We thank the reviewers for mentioning a few pioneering works [8, 58, D] that learn to localize

Neural Information Processing Systems

We did not reproduce the results of [C] on SUN due to the time limit (we will include them in the final paper). Here we compare our work with these papers and clarify our novelty. Besides, they can only localize a small number of object parts, i.e., [58] localizes 2 parts and [D] localizes 4 parts, and they are not able to [...]. "The employment of these ideas together for attribute localization and ZSL is quite interesting and seems to lead to [...]" The discussion will be added in the final paper. APN can obtain attention maps that better localize visual attributes compared to CAM (see Figure 1 in the main paper). [...] CNN to predict binary attributes without prototypes.


SignLoc: Robust Localization using Navigation Signs and Public Maps

Zimmerman, Nicky, Loo, Joel, Agrawal, Ayush, Hsu, David

arXiv.org Artificial Intelligence

Navigation signs and maps, such as floor plans and street maps, are widely available and serve as ubiquitous aids for way-finding in human environments. Yet, they are rarely used by robot systems. This paper presents SignLoc, a global localization method that leverages navigation signs to localize the robot on publicly available maps, specifically floor plans and OpenStreetMap (OSM) graphs, without prior sensor-based mapping. To localize, SignLoc matches sign cues to a large-scale, indoor-outdoor navigation graph constructed from publicly available maps: it employs a probabilistic observation model to match directional and locational cues from the detected signs to the graph, enabling robust topo-semantic localization within a Monte Carlo framework. We evaluated SignLoc in diverse large-scale environments: part of a university campus, a shopping mall, and a hospital complex. Experimental results show that SignLoc reliably localizes the robot after observing only one to two signs. Localizing and navigating in the open world remains a challenge for robots due to the diversity and complexity of human environments.
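The Monte Carlo step the abstract describes can be sketched as a particle filter over graph nodes, where a detected sign reweights the belief. The graph, likelihood values, and update below are hypothetical simplifications, not SignLoc's implementation.

```python
import random

# Minimal topological Monte Carlo localization sketch: particles live on
# graph nodes; an observed sign cue reweights them by node compatibility.
random.seed(0)
nodes = ["lobby", "corridor", "exit_A", "exit_B"]
# Hypothetical observation model: likelihood of seeing an "Exit" sign per node.
sign_likelihood = {"lobby": 0.1, "corridor": 0.3, "exit_A": 0.9, "exit_B": 0.8}

particles = [random.choice(nodes) for _ in range(1000)]

def update(particles, likelihood):
    # Importance weighting followed by multinomial resampling.
    weights = [likelihood[p] for p in particles]
    return random.choices(particles, weights=weights, k=len(particles))

particles = update(particles, sign_likelihood)
best = max(set(particles), key=particles.count)
print(best)  # after one sign observation, belief concentrates on exit nodes
```

In the real system the update alternates with a motion step along graph edges, so one or two signs suffice to collapse the belief.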



Spooky Action at a Distance: Normalization Layers Enable Side-Channel Spatial Communication

Pfrommer, Samuel, Ma, George, Huang, Yixiao, Sojoudi, Somayeh

arXiv.org Artificial Intelligence

This work shows that normalization layers can facilitate a surprising degree of communication across the spatial dimensions of an input tensor. We study a toy localization task with a convolutional architecture and show that normalization layers enable an iterative message passing procedure, allowing information aggregation from well outside the local receptive field. Our results suggest that normalization layers should be employed with caution in applications such as diffusion-based trajectory generation, where maintaining a spatially limited receptive field is crucial.
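The mechanism is easy to demonstrate in isolation: a normalization computed over the spatial axis couples every position to the global statistics, so a perturbation at one end of the tensor shifts the output at the other end with no convolution connecting them. This is a minimal sketch, not the paper's architecture.

```python
import numpy as np

# A "message" injected at position 0 of an otherwise-zero 1D signal.
x = np.zeros(16)
x[0] = 10.0

def spatial_norm(v, eps=1e-5):
    # Normalization over the spatial axis: every output depends on the
    # global mean and standard deviation of the whole signal.
    return (v - v.mean()) / (v.std() + eps)

y = spatial_norm(x)
# Position 15 never interacts with position 0 through any local kernel,
# yet its value shifts because the global statistics changed.
print(y[15])
```

With an all-zero input, position 15 maps to exactly zero; with the message present it does not, which is precisely the side channel the paper warns about.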


Low-Cost Infrastructure-Free 3D Relative Localization with Sub-Meter Accuracy in Near Field

Gao, Qiangsheng, Cheng, Ka Ho, Qiu, Li, Gong, Zijun

arXiv.org Artificial Intelligence

Relative localization in the near-field scenario is critically important for unmanned vehicle (UxV) applications. Although related works addressing the 2D relative localization problem have been widely studied for unmanned ground vehicles (UGVs), the problem in 3D scenarios for unmanned aerial vehicles (UAVs) involves more uncertainties and remains to be investigated. Inspired by the phenomenon that animals can achieve swarm behaviors solely based on individual perception of relative information, this study proposes an infrastructure-free 3D relative localization framework that relies exclusively on onboard ultra-wideband (UWB) sensors. Leveraging 2D relative positioning research, we conducted feasibility analysis, system modeling, simulations, performance evaluation, and field tests using UWB sensors. The key contributions of this work include: derivation of the Cramér-Rao lower bound (CRLB) and geometric dilution of precision (GDOP) for near-field scenarios; development of two localization algorithms, one based on the Euclidean distance matrix (EDM) and another employing maximum likelihood estimation (MLE); comprehensive performance comparison and computational complexity analysis against state-of-the-art methods; simulation studies and field experiments; and a novel sensor deployment strategy inspired by animal behavior, enabling single-sensor implementation within the proposed framework for UxV applications. The theoretical, simulation, and experimental results demonstrate strong generalizability to other 3D near-field localization tasks, with significant potential for a cost-effective cross-platform UxV collaborative system. Precise localization is essential in diverse domains, including multi-agent robotic systems, the Internet of Things, intelligent vehicular networks, and logistics [1]-[3].
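The EDM-based algorithm family mentioned among the contributions can be sketched with classical multidimensional scaling, which recovers relative coordinates (up to a rigid transform) from a matrix of pairwise distances. The points and setup below are illustrative, not the paper's algorithm.

```python
import numpy as np

# Four hypothetical nodes in 3D; in practice the pairwise distances would
# come from UWB ranging rather than known ground-truth coordinates.
true_pts = np.array([[0., 0., 0.], [2., 0., 0.], [0., 3., 0.], [1., 1., 2.]])
D2 = ((true_pts[:, None, :] - true_pts[None, :, :]) ** 2).sum(-1)  # squared EDM

# Classical MDS: double-center the squared EDM to get a Gram matrix,
# then embed via its top eigenpairs.
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
G = -0.5 * J @ D2 @ J                        # Gram matrix of centered points
vals, vecs = np.linalg.eigh(G)
idx = np.argsort(vals)[::-1][:3]             # top-3 eigenpairs -> 3D embedding
est = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# The reconstruction matches the true geometry up to a rigid transform,
# so all pairwise distances are preserved.
D2_est = ((est[:, None, :] - est[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_est, atol=1e-6))
```

With noisy UWB ranges the same pipeline gives a least-squares-flavored estimate, which is where MLE refinement and the CRLB/GDOP analysis come in.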


Deep Learning-based Alignment Measurement in Knee Radiographs

Hu, Zhisen, Cullen, Dominic, Thompson, Peter, Johnson, David, Bian, Chang, Tiulpin, Aleksei, Cootes, Timothy, Lindner, Claudia

arXiv.org Artificial Intelligence

Radiographic knee alignment (KA) measurement is important for predicting joint health and surgical outcomes after total knee replacement. Traditional methods for KA measurement are manual, time-consuming and require long-leg radiographs. This study proposes a deep learning-based method to measure KA in anteroposterior knee radiographs via automatically localized knee anatomical landmarks. Our method builds on hourglass networks and incorporates an attention gate structure to enhance robustness and focus on key anatomical features. To our knowledge, this is the first deep learning-based method to localize over 100 knee anatomical landmarks to fully outline the knee shape while integrating KA measurements on both pre-operative and post-operative images. It provides highly accurate and reliable anatomical varus/valgus KA measurements using the anatomical tibiofemoral angle, achieving mean absolute differences of ~1° when compared to clinical ground-truth measurements. Agreement between automated and clinical measurements was excellent pre-operatively (intra-class correlation coefficient (ICC) = 0.97) and good post-operatively (ICC = 0.86). Our findings demonstrate that KA assessment can be automated with high accuracy, creating opportunities for digitally enhanced clinical workflows.
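How a varus/valgus angle falls out of localized landmarks can be sketched as follows, assuming hypothetical 2D landmark pairs that define the femoral and tibial anatomical axes. The coordinates and sign convention are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical landmark pairs defining each anatomical axis in image
# coordinates (x right, y down the limb); real axes are fit through many
# automatically localized landmarks.
femur_axis_pts = np.array([[0.0, 0.0], [0.5, 10.0]])
tibia_axis_pts = np.array([[0.5, 10.0], [0.0, 20.0]])

def axis_angle_deg(p, q):
    # Signed angle of the axis relative to the vertical image direction.
    v = q - p
    return np.degrees(np.arctan2(v[0], v[1]))

femur = axis_angle_deg(*femur_axis_pts)
tibia = axis_angle_deg(*tibia_axis_pts)
tfa = femur - tibia  # anatomical tibiofemoral angle (sign convention illustrative)
print(round(tfa, 2))
```

The clinical comparison in the abstract amounts to taking the mean absolute difference between such automated angles and manually measured ones.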


Towards a Formal Specification for Self-organized Shape Formation in Swarm Robotics

Darr, YR, Niazi, MA

arXiv.org Artificial Intelligence

The self-organization of robots for the formation of structures and shapes is a stimulating application of swarm robotic systems. It involves a large number of autonomous robots of heterogeneous behavior, coordination among them, and their interaction with a dynamic environment. This structure-formation process is considered a complex system, which needs to be modeled with a suitable modeling approach. Although the formal specification approach, along with other formal methods, has been used to model the behavior of robots in a swarm, to the best of our knowledge it has not been used to model the self-organization process in swarm robotic systems for shape formation. In this paper, we use a formal specification approach to model the shape formation task of swarm robots. We use the Z (Zed) language of formal specification, which is a state-based language, to model the states of the entities of the system. We demonstrate the effectiveness of Z for self-organized shape formation. The presented formal specification model gives the outlines for designing and implementing a swarm robotic system for the formation of complex shapes and structures. It also provides the foundation for modeling the complex shape formation process for swarm robotics using a multi-agent system in a simulation-based environment. Keywords: Swarm robotics, Self-organization, Formal specification, Complex systems
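As a flavor of the state-based modeling described above, a minimal, hypothetical Z schema for a single swarm robot might look as follows (written with the LaTeX zed-csp package; the state components and invariant are illustrative, not taken from the paper):

```latex
\begin{schema}{Robot}
  pos : POSITION \\
  role : ROLE \\
  neighbours : \power ROBOT\_ID
\where
  role \in \{ seed, free, attached \} \\
  role = attached \implies neighbours \neq \emptyset
\end{schema}
```

A full specification would add schemas for the swarm state and operation schemas such as a robot attaching itself to a partially formed shape.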


Auditory Localization and Assessment of Consequential Robot Sounds: A Multi-Method Study in Virtual Reality

Wessels, Marlene, de Heuvel, Jorge, Müller, Leon, Maier, Anna Luisa, Bennewitz, Maren, Kraus, Johannes

arXiv.org Artificial Intelligence

Mobile robots increasingly operate alongside humans but are often out of sight, so humans need to rely on the sounds of the robots to recognize their presence. For successful human-robot interaction (HRI), it is therefore crucial to understand how humans perceive robots by their consequential sounds, i.e., operating noise. Prior research suggests that the sound of a quadruped Go1 is more detectable than that of a wheeled Turtlebot. This study builds on this and examines the human ability to localize consequential sounds of three robots (quadruped Go1, wheeled Turtlebot 2i, wheeled HSR) in Virtual Reality. In a within-subjects design, we assessed participants' localization performance for the robots with and without an acoustic vehicle alerting system (AVAS) for two velocities (0.3, 0.8 m/s) and two trajectories (head-on, radial). In each trial, participants were presented with the sound of a moving robot for 3 s and were tasked to point at its final position (localization task). Localization errors were measured as the absolute angular difference between the participants' estimated and the actual robot position. Results showed that the robot type significantly influenced localization accuracy and precision, with the sound of the wheeled HSR (especially without AVAS) performing worst under all experimental conditions. Surprisingly, participants rated the HSR sound as more positive, less annoying, and more trustworthy than the Turtlebot and Go1 sounds. This reveals a tension between subjective evaluation and objective auditory localization performance. Our findings highlight consequential robot sounds as a critical factor for designing intuitive and effective HRI, with implications for human-centered robot design and social navigation.
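The error metric described above, the absolute angular difference between the estimated and actual robot position as seen from the listener, can be sketched as follows (coordinates hypothetical):

```python
import numpy as np

# Hypothetical 2D positions: the listener and the robot's true and
# participant-estimated final positions.
listener = np.array([0.0, 0.0])
robot_true = np.array([3.0, 4.0])
robot_estimated = np.array([4.0, 3.0])

def bearing(origin, target):
    # Direction from origin to target, in radians.
    d = target - origin
    return np.arctan2(d[1], d[0])

diff = bearing(listener, robot_estimated) - bearing(listener, robot_true)
# Wrap to [-pi, pi] so e.g. bearings of 350 and 10 degrees count as a
# 20-degree error, then take the absolute value.
error_deg = abs(np.degrees(np.arctan2(np.sin(diff), np.cos(diff))))
print(round(error_deg, 1))
```

Averaging this quantity per condition gives the accuracy and precision measures the study compares across robot types.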