Learning Elastic Costs to Shape Monge Displacements

Neural Information Processing Systems

Given a source and a target probability measure, the Monge problem studies efficient ways to map the former onto the latter. This efficiency is quantified by defining a cost function between source and target data.
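As a concrete illustration of the cost function mentioned above, the sketch below builds a pairwise cost matrix between discrete source and target samples using the standard squared-Euclidean cost c(x, y) = ||x - y||^2; the function name and the choice of cost are illustrative assumptions, not the paper's learned elastic cost.

```python
import numpy as np

def cost_matrix(source, target):
    """Pairwise squared-Euclidean cost c(x, y) = ||x - y||^2 between samples."""
    diff = source[:, None, :] - target[None, :, :]
    return np.sum(diff ** 2, axis=-1)

# Two source points and two target points in the plane.
x = np.array([[0.0, 0.0], [1.0, 0.0]])
y = np.array([[0.0, 1.0], [2.0, 0.0]])
C = cost_matrix(x, y)  # shape (2, 2); C[i, j] is the cost of mapping x[i] to y[j]
```

A discrete Monge map then chooses, for each source point, a target assignment that keeps the total cost low.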





Learning Composable Energy Surrogates for PDE Order Reduction

Neural Information Processing Systems

To address this, we leverage parametric modular structure to learn component-level surrogates, enabling cheaper high-fidelity simulation. We use a neural network to model the stored potential energy in a component given boundary conditions.
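A minimal sketch of the idea of modeling stored potential energy with a neural network, assuming a tiny NumPy MLP that maps a vector of boundary-condition DoFs to a scalar; all names and sizes here are hypothetical, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden):
    """Tiny MLP mapping boundary-condition DoFs to a scalar energy (sketch)."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def predict_energy(params, bc):
    h = np.tanh(bc @ params["W1"] + params["b1"])
    # Squaring the output keeps the predicted stored energy non-negative.
    return float((h @ params["W2"] + params["b2"]) ** 2)

params = init_mlp(n_in=8, n_hidden=16)
e = predict_energy(params, rng.normal(size=8))
```

In practice such a surrogate would be trained on energies computed by a high-fidelity PDE solver.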


positive feedback, and greatly appreciate the critical and constructive suggestions

Neural Information Processing Systems

Thank you for your valuable feedback, which is very helpful in improving the paper. We're encouraged by the broadly positive feedback, and greatly appreciate the critical and constructive suggestions.

"Put this in the context of other work on computational homogenization / multi-scale finite element methods": Our method is related to these and to the boundary element method (BEM).

"Limitation associated with micro-scale buckling... the coarse-grain behavior might exhibit hysteretic effects": Good point.

"How sensitive is the outer optimization to the accuracy of the surrogate gradients?": …

"Do you know how the CES method scales with system size in terms of accuracy and evaluation time?": In terms of …

"the method to solve the outer optimization over BCs to find minimum energy solutions to the composed surrogates": Free DoFs are optimized to minimize total predicted energy using L-BFGS.

"The discuss of the surrogate and i.i.d. …": …

"Are the BCs shared when a boundary is common between two cells": Yes. We have 1 DoF for each blue point in Fig 2.

"It's not clear how the HMC and PDE solver are used together": HMC is used to generate training BCs, preferring larger … The PDE solver is used to compute the gradient of the pdf (which depends on E) w.r.t. the BC. Given BCs, we run the solver to determine the internal u and E. We compute dE/dBC with the … Then we use this to compute the gradient of the pdf w.r.t. the BCs, needed for the leapfrog step.

"Does the HMC require a significant burn-in time before producing reasonable samples": No. Note: we don't truly care … Per the appendix, HMC took between 3 and 100 leapfrog steps per sample.

"The process of using the surrogates to solve the original problem can be explained in more detail": …

"Newton's method is neither the fastest nor the most stable... a comparison with more sophisticated methods would be …": From a brief look, Liu et al.'s method appears tailored for …

Reviewer 5: "There is one outlier in L2 compression that was quite bad": We will discuss this in the main paper.

"A comment might help the reader situate this work within the more usual (less idyllic) context of approximating …": This is a good suggestion: we will relate to other work in learning energies.


Self-Adaptive Motion Tracking against On-body Displacement of Flexible Sensors

Neural Information Processing Systems

Flexible sensors are promising for ubiquitous sensing of human status due to their flexibility and easy integration as wearable systems. However, on-body displacement of sensors is inevitable since the device cannot be firmly worn at a fixed position across different sessions. This displacement issue causes complicated patterns and significant challenges to subsequent machine learning algorithms. Our work proposes a novel self-adaptive motion tracking network to address this challenge. Our network consists of three novel components: i) a light-weight learnable Affine Transformation layer whose parameters can be tuned to efficiently adapt to unknown displacements; ii) a Fourier-encoded LSTM network for better pattern identification; iii) a novel sequence discrepancy loss equipped with auxiliary regressors for unsupervised tuning of Affine Transformation parameters.
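The first component above, a light-weight learnable affine transformation, can be sketched as a layer y = Ax + b applied per time step; the class name, dimensions, and identity initialization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class AffineTransform:
    """Light-weight affine layer y = A x + b whose parameters would be tuned
    per session to compensate for on-body sensor displacement (sketch)."""

    def __init__(self, dim):
        self.A = np.eye(dim)    # identity initialization: assume no displacement
        self.b = np.zeros(dim)

    def __call__(self, x):
        # x: (time_steps, dim) sensor readings; transform each time step.
        return x @ self.A.T + self.b

layer = AffineTransform(3)
signal = np.ones((4, 3))        # 4 time steps of a 3-channel sensor reading
out = layer(signal)             # identity init leaves the signal unchanged
```

In the proposed system, `A` and `b` would be tuned in an unsupervised manner via the sequence discrepancy loss rather than fixed.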


Flexible mapping of abstract domains by grid cells via self-supervised extraction and projection of generalized velocity signals

Neural Information Processing Systems

Grid cells in the medial entorhinal cortex create remarkable periodic maps of explored space during navigation. Recent studies show that they form similar maps of abstract cognitive spaces. Examples of such abstract environments include auditory tone sequences in which the pitch is continuously varied or images in which abstract features are continuously deformed (e.g., a cartoon bird whose legs stretch and shrink).


MagicSkin: Balancing Marker and Markerless Modes in Vision-Based Tactile Sensors with a Translucent Skin

Tijani, Oluwatimilehin, Chen, Zhuo, Deng, Jiankang, Luo, Shan

arXiv.org Artificial Intelligence

Vision-based tactile sensors (VBTS) face a fundamental trade-off between marker and markerless designs of the tactile skin: opaque ink markers enable measurement of force and tangential displacement but completely occlude the geometric features necessary for object and texture classification, while a markerless skin preserves surface details but struggles to measure tangential displacements effectively. Current approaches to this problem, via UV lighting or virtual marker transfer using learning-based models, introduce hardware complexity or computational burdens. This paper introduces MagicSkin, a novel tactile skin with translucent, tinted markers that balances the marker and markerless modes for VBTS. It enables simultaneous tangential displacement tracking, force prediction, and surface detail preservation. The skin plugs into GelSight-family sensors without requiring additional hardware or software tools. We comprehensively evaluate MagicSkin on downstream tasks. The translucent markers enhance rather than degrade sensing performance compared with traditional markerless and inked-marker designs, achieving the best performance in object classification (99.17\%), texture classification (93.51\%), tangential displacement tracking (97\% point retention), and force prediction (66\% improvement in total force error). These results demonstrate that the translucent skin eliminates the traditional trade-off between marker and markerless modes, paving the way for the multimodal tactile sensing essential to tactile robotics. See videos at this \href{https://zhuochenn.github.io/MagicSkin_project/}{link}.


Humanity in the Age of AI: Reassessing 2025's Existential-Risk Narratives

Louadi, Mohamed El

arXiv.org Artificial Intelligence

Two 2025 publications, "AI 2027" (Kokotajlo et al., 2025) and "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares, 2025), assert that superintelligent artificial intelligence will almost certainly destroy or render humanity obsolete within the next decade. Both rest on the classic chain formulated by Good (1965) and Bostrom (2014): intelligence explosion, superintelligence, lethal misalignment. This article subjects each link to the empirical record of 2023-2025. Sixty years after Good's speculation, none of the required phenomena (sustained recursive self-improvement, autonomous strategic awareness, or intractable lethal misalignment) have been observed. Current generative models remain narrow, statistically trained artefacts: powerful, opaque, and imperfect, but devoid of the properties that would make the catastrophic scenarios plausible. Following Whittaker (2025a, 2025b, 2025c) and Zuboff (2019, 2025), we argue that the existential-risk thesis functions primarily as an ideological distraction from the ongoing consolidation of surveillance capitalism and extreme concentration of computational power. The thesis is further inflated by the 2025 AI speculative bubble, where trillions in investments in rapidly depreciating "digital lettuce" hardware (McWilliams, 2025) mask lagging revenues and jobless growth rather than heralding superintelligence. The thesis remains, in November 2025, a speculative hypothesis amplified by a speculative financial bubble rather than a demonstrated probability.