Neural Unsigned Distance Fields for Implicit Function Learning

Neural Information Processing Systems

In this work, we target a learnable output representation that allows continuous, high-resolution outputs of arbitrary shape. Recent works represent 3D surfaces implicitly with a neural network, thereby breaking previous barriers in resolution and in the ability to represent diverse topologies. However, neural implicit representations are limited to closed surfaces, which divide space into inside and outside. Many real-world objects, such as the walls of a scene scanned by a sensor, clothing, or a car with inner structures, are not closed. This constitutes a significant barrier, both in data pre-processing (objects need to be artificially closed, creating artifacts) and in the ability to output open surfaces.
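The core mechanics of an unsigned distance field can be sketched with an analytic UDF and gradient-based surface projection: each query point is moved against the field's gradient by its predicted distance, yielding a dense point cloud on the surface. The sketch below uses a unit sphere as a stand-in for the learned network (the function names and the finite-difference gradient are illustrative assumptions; the paper trains a neural network and uses autodiff):

```python
import numpy as np

def udf(p):
    # Unsigned distance from point(s) p to the unit sphere surface;
    # a stand-in for a learned neural UDF f(p) >= 0.
    return np.abs(np.linalg.norm(p, axis=-1) - 1.0)

def udf_grad(p, eps=1e-5):
    # Finite-difference gradient of the UDF (a learned model would
    # supply this via automatic differentiation instead).
    g = np.zeros_like(p)
    for i in range(p.shape[-1]):
        dp = np.zeros(p.shape[-1])
        dp[i] = eps
        g[..., i] = (udf(p + dp) - udf(p - dp)) / (2 * eps)
    return g

def project_to_surface(p, steps=5):
    # Surface extraction idea: repeatedly move each query point
    # against the gradient by its current distance value.
    for _ in range(steps):
        d = udf(p)[..., None]
        g = udf_grad(p)
        p = p - d * g
    return p

# Project random query points onto the (zero-level) surface.
pts = np.random.default_rng(0).normal(size=(100, 3)) * 2.0
surf = project_to_surface(pts)
err = np.abs(np.linalg.norm(surf, axis=-1) - 1.0)
print(err.max())  # residual distance to the surface
```

Because the field is unsigned, this extraction works for open surfaces as well; nothing requires a consistent inside/outside.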




[Figure residue: qualitative comparison on cars, SAL [4] vs. Ours, and a steps-vs-error plot]

Neural Information Processing Systems

We thank all reviewers for their useful feedback. Reviewers acknowledge that NDFs are "well founded" and "simple and general, which promises wider applicability". On the concern that "there is no evidence that it should work competitively on all classes together": we find that NDF outperforms all baselines. On "Is there a test/train split, and are quantitative statistics provided on the test set?": we follow the suggestion and added training time. We also transferred key information into the main paper.



NeuralTouch: Neural Descriptors for Precise Sim-to-Real Tactile Robot Control

Lin, Yijiong, Deng, Bowen, Lu, Chenghua, Yang, Max, Psomopoulou, Efi, Lepora, Nathan F.

arXiv.org Artificial Intelligence

Abstract--Grasping accuracy is a critical prerequisite for precise object manipulation, often requiring careful alignment between the robot hand and object. Neural Descriptor Fields (NDF) offer a promising vision-based method to generate grasping poses that generalize across object categories. However, NDF alone can produce inaccurate poses due to imperfect camera calibration, incomplete point clouds, and object variability. Meanwhile, tactile sensing enables more precise contact, but existing approaches typically learn policies limited to simple, predefined contact geometries. In this work, we introduce NeuralTouch, a multi-modal framework that integrates NDF and tactile sensing to enable accurate, generalizable grasping through gentle physical interaction. Our approach leverages NDF to implicitly represent the target contact geometry, from which a deep reinforcement learning (RL) policy is trained to refine the grasp using tactile feedback. This policy is conditioned on the neural descriptors and does not require explicit specification of contact types. Results show that NeuralTouch significantly improves grasping accuracy and robustness over baseline methods, offering a general framework for precise, contact-rich robotic manipulation.

I. INTRODUCTION A commonplace behaviour in humans is our ability to glance at an object to determine its general position and then use touch alone to grasp it with precision.


Query Circuits: Explaining How Language Models Answer User Prompts

Wu, Tung-Yu, Barez, Fazl

arXiv.org Artificial Intelligence

Explaining why a language model produces a particular output requires local, input-level explanations. Existing methods uncover global capability circuits (e.g., indirect object identification), but not why the model answers a specific input query in a particular way. We introduce query circuits, which directly trace the information flow inside a model that maps a specific input to the output. Unlike surrogate-based approaches (e.g., sparse autoencoders), query circuits are identified within the model itself, resulting in more faithful and computationally accessible explanations. To make query circuits practical, we address two challenges. First, we introduce Normalized Deviation Faithfulness (NDF), a robust metric that evaluates how well a discovered circuit recovers the model's decision for a specific input and is broadly applicable to circuit discovery beyond our setting. Second, we develop sampling-based methods to efficiently identify circuits that are sparse yet faithfully describe the model's behavior. Across benchmarks (IOI, arithmetic, MMLU, and ARC), we find that there exist extremely sparse query circuits within the model that can recover much of its performance on single queries. For example, a circuit covering only 1.3% of model connections can recover about 60% of performance on an MMLU question. Overall, query circuits provide a step towards faithful, scalable explanations of how language models process individual inputs.
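The general shape of circuit-faithfulness evaluation can be illustrated with a toy model: run the model with only the circuit's connections active, compare its output to the full model, and normalize the deviation against a fully-ablated baseline. The sketch below is an illustrative normalized-deviation score under these assumptions; the paper's exact NDF definition differs in detail, and the "model", masks, and normalization here are hypothetical:

```python
import numpy as np

# Toy "model": output logits are a linear function of input features,
# so each weight W[i, j] plays the role of one connection.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 4))   # 8 features -> 4 output logits
x = rng.normal(size=8)

def run(mask):
    # Run the model with only the connections selected by mask active,
    # standing in for executing only a circuit's edges.
    return (W * mask).T @ x

full = run(np.ones_like(W))    # unablated model output
empty = run(np.zeros_like(W))  # fully-ablated baseline output

# Hypothetical sparse "circuit": keep the 25% largest-magnitude weights.
k = W.size // 4
mask = np.zeros_like(W)
mask.ravel()[np.argsort(np.abs(W).ravel())[-k:]] = 1.0
circuit = run(mask)

# Illustrative normalized-deviation score: 1 means the circuit matches
# the full model exactly; 0 means it is no better than full ablation.
score = 1.0 - np.linalg.norm(circuit - full) / np.linalg.norm(empty - full)
print(round(score, 3))
```

Normalizing against the fully-ablated baseline is what makes such a score comparable across inputs and models, which is the property the abstract highlights for NDF.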