
Collaborating Authors: gel


Benchmarking Resilience and Sensitivity of Polyurethane-Based Vision-Based Tactile Sensors

Davis, Benjamin, Stuart, Hannah

arXiv.org Artificial Intelligence

Vision-based tactile sensors (VBTSs) are a promising technology for robots, providing them with dense signals that can be translated into an understanding of normal and shear load, contact region, texture classification, and more. However, existing VBTS tactile surfaces make use of silicone gels, which provide high sensitivity but easily deteriorate from loading and surface wear. We propose that polyurethane rubber, used for high-load applications like shoe soles, rubber wheels, and industrial gaskets, may provide improved physical gel resilience, potentially at the cost of sensitivity. To compare the resilience and sensitivity of silicone and polyurethane VBTS gels, we propose a series of standard evaluation benchmarking protocols. Our resilience tests assess sensor durability across normal loading, shear loading, and abrasion. For sensitivity, we introduce model-free assessments of force and spatial sensitivity to directly measure the physical capabilities of each gel without effects introduced from data and model quality. Finally, we include a bottle cap loosening and tightening demonstration as an example where polyurethane gels provide an advantage over their silicone counterparts.


Graph Evidential Learning for Anomaly Detection

Wei, Chunyu, Hu, Wenji, Hao, Xingjia, Wang, Yunhai, Chen, Yueguo, Bai, Bing, Wang, Fei

arXiv.org Artificial Intelligence

Graph anomaly detection faces significant challenges due to the scarcity of reliable anomaly-labeled datasets, driving the development of unsupervised methods. Graph autoencoders (GAEs) have emerged as a dominant approach by reconstructing graph structures and node features while deriving anomaly scores from reconstruction errors. However, relying solely on reconstruction error for anomaly detection has limitations, as it increases the sensitivity to noise and overfitting. To address these issues, we propose Graph Evidential Learning (GEL), a probabilistic framework that redefines the reconstruction process through evidential learning. By modeling node features and graph topology using evidential distributions, GEL quantifies two types of uncertainty: graph uncertainty and reconstruction uncertainty, incorporating them into the anomaly scoring mechanism. Extensive experiments demonstrate that GEL achieves state-of-the-art performance while maintaining high robustness against noise and structural perturbations.
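The scoring idea described above, combining reconstruction error with quantified uncertainty, can be sketched minimally. The snippet below assumes a Normal-Inverse-Gamma evidential parameterization per node feature, a common choice in evidential deep learning; the parameterization and all names are our assumptions, not necessarily GEL's exact formulation.

```python
def nig_uncertainties(nu, alpha, beta):
    """Aleatoric and epistemic uncertainty of a Normal-Inverse-Gamma
    evidential output (gamma, nu, alpha, beta), requiring alpha > 1."""
    aleatoric = beta / (alpha - 1.0)          # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))   # uncertainty about the mean
    return aleatoric, epistemic

def anomaly_score(x, gamma, nu, alpha, beta):
    """Score a node by its reconstruction error plus both uncertainty
    terms, so poorly explained *and* uncertain nodes rank as anomalous."""
    recon_error = (x - gamma) ** 2
    aleatoric, epistemic = nig_uncertainties(nu, alpha, beta)
    return recon_error + aleatoric + epistemic
```

Folding the uncertainty terms into the score is what distinguishes this from a plain autoencoder, where a noisy but well-reconstructed node and a confidently mis-reconstructed node would be scored identically.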


UltraGelBot: Autonomous Gel Dispenser for Robotic Ultrasound

Raina, Deepak, Zhao, Ziming, Voyles, Richard, Wachs, Juan, Saha, Subir K., Chandrashekhara, S. H.

arXiv.org Artificial Intelligence

However, in these robotic systems, the gel is still applied by the human attendant. This human intervention results in the acquisition of suboptimal images due to inappropriate acoustic coupling [7]. Further, the procedure time also increases, as the RUS needs to be halted several times in between the procedure for manual gel application. Moreover, the presence of humans in the patient's vicinity does not achieve the complete safety of the operator, as promised by RUS. Thus, an automated method for dispensing of ultrasound gel is the immediate requirement of RUS.

(Figure: model highlighting the components of the UltraGelBot.) The syringe is constrained by the housing to stabilize its linear movement. The syringe has a piston, which is connected to the moving end of the linear actuator (Actuonix L16-P). It will move to compress the piston inside the reservoir and dispense the gel. For refilling, the syringe needs to be detached from the assembly and refilled using a commercially available container. Larger reservoirs may be included in the design in order to …


Glassy gel is hard as plastic and stretches 7 times its length

New Scientist

When you think of gel, you might imagine goo – but a new gel-like material has been engineered to be soft enough to stretch to almost seven times its original length while still being strong and clear, like glass. Michael Dickey at North Carolina State University says his team discovered these "glassy gels" when his student, Meixiang Wang, was experimenting with ionic liquids and kept finding unexpected mechanical properties. The materials they devised are more than 50 per cent liquid, but as strong as the plastics used for water bottles, while also being very stretchy and sticky. "There are a bunch of cool things about them," he says. Each glassy gel consists of long molecules called polymers mixed with an ionic liquid, a fluid that is essentially a salt in liquid form.


BOK-VQA: Bilingual Outside Knowledge-based Visual Question Answering via Graph Representation Pretraining

Kim, Minjun, Song, Seungwoo, Lee, Youhan, Jang, Haneol, Lim, Kyungtae

arXiv.org Artificial Intelligence

Current research on generative models, such as the recently released GPT-4, aims to retrieve knowledge relevant to multimodal and multilingual inputs in order to provide answers. Under these circumstances, the demand for multilingual evaluation of visual question answering (VQA), a representative multimodal task, has increased. Accordingly, in this study we propose a bilingual outside-knowledge VQA (BOK-VQA) dataset that can be extended to other languages. The proposed data include 17K images, 17K question-answer pairs in both Korean and English, and 280K instances of knowledge information related to the question-answer content. We also present a framework that effectively injects knowledge information into a VQA system by pretraining the knowledge information of BOK-VQA in the form of graph embeddings. Finally, through in-depth analysis, we demonstrate the actual effect of the knowledge information contained in the constructed training data on VQA.


Boosting Federated Learning in Resource-Constrained Networks

Boukhari, Mohamed Yassine, Dhasade, Akash, Kermarrec, Anne-Marie, Pires, Rafael, Safsafi, Othmane, Sharma, Rishi

arXiv.org Artificial Intelligence

Federated learning (FL) enables a set of client devices to collaboratively train a model without sharing raw data. This process, though, operates under the constrained computation and communication resources of edge devices. These constraints, combined with systems heterogeneity, force some participating clients to perform fewer local updates than the server expects, thus slowing down convergence. Furthermore, exhaustive hyperparameter tuning in FL is resource-intensive, yet without it convergence is adversely affected. In this work, we propose GeL, the guess and learn algorithm. GeL enables constrained edge devices to perform additional learning through guessed updates on top of gradient-based steps. These guesses are gradientless, i.e., participating clients leverage them for free. Our generic guessing algorithm (i) can be flexibly combined with several state-of-the-art algorithms, including FedProx, FedNova, and FedYogi; and (ii) achieves significantly improved performance when the learning rates are not well tuned. We conduct extensive experiments and show that GeL can boost empirical convergence by up to 40% in resource-constrained networks while relieving the need for exhaustive learning rate tuning.
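The core "guess" idea from the abstract can be illustrated with a toy sketch: after its gradient-based local steps, a client extrapolates along its most recent update direction for a few extra gradient-free steps. This is a hedged illustration of the concept only, not the paper's exact algorithm; the function and parameter names are ours.

```python
def local_round(w, grad_fn, lr, n_grad_steps, n_guess_steps):
    """One client round on a scalar parameter: n_grad_steps true SGD
    steps, then n_guess_steps gradientless 'guessed' steps that simply
    reuse the last observed update direction (no extra gradient cost)."""
    last_update = 0.0
    for _ in range(n_grad_steps):
        step = -lr * grad_fn(w)   # ordinary gradient-based update
        w += step
        last_update = step
    for _ in range(n_guess_steps):
        w += last_update          # extrapolate for free
    return w
```

For example, minimizing f(w) = w^2 from w = 1.0 with lr = 0.1, one gradient step moves w to 0.8, and one guessed step repeats the -0.2 update to reach 0.6 without computing another gradient.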


Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation

Funk, Niklas, Helmut, Erik, Chalvatzaki, Georgia, Calandra, Roberto, Peters, Jan

arXiv.org Artificial Intelligence

Optical tactile sensors have recently become popular. They provide high spatial resolution but struggle to offer fine temporal resolution. To overcome this shortcoming, we study the idea of replacing the RGB camera with an event-based camera and introduce a new event-based optical tactile sensor called Evetac. Along with the hardware design, we develop touch processing algorithms to process its measurements online at 1000 Hz. We devise an efficient algorithm to track the elastomer's deformation through the imprinted markers despite the sensor's sparse output. Benchmarking experiments demonstrate Evetac's capabilities of sensing vibrations up to 498 Hz, reconstructing shear forces, and significantly reducing data rates compared to RGB optical tactile sensors. Moreover, Evetac's output and the marker tracking provide meaningful features for learning data-driven slip detection and prediction models. The learned models form the basis for a robust and adaptive closed-loop grasp controller capable of handling a wide range of objects. We believe that fast and efficient event-based tactile sensors like Evetac will be essential for bringing human-like manipulation capabilities to robotics. The sensor design is open-sourced at https://sites.google.com/view/evetac .
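The marker-tracking problem the abstract mentions can be sketched in a simplified form: given the markers' previous positions and a sparse batch of event coordinates, move each marker to the centroid of nearby events. This is a crude stand-in for Evetac's actual tracker, for illustration only; the search radius and function name are our assumptions.

```python
def track_markers(prev_positions, event_coords, radius=5.0):
    """Update each marker position to the centroid of events within
    `radius` pixels; markers with no nearby events stay put. A toy
    sketch of nearest-event marker tracking on sparse output."""
    new_positions = []
    for (mx, my) in prev_positions:
        near = [(x, y) for (x, y) in event_coords
                if (x - mx) ** 2 + (y - my) ** 2 <= radius ** 2]
        if near:
            cx = sum(x for x, _ in near) / len(near)
            cy = sum(y for _, y in near) / len(near)
            new_positions.append((cx, cy))
        else:
            new_positions.append((mx, my))  # no evidence: keep old position
    return new_positions
```

Because events arrive only where the image changes, a loop like this can run at kilohertz rates on the small neighborhoods around each marker rather than on full frames.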


DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale Objects From Flat Surfaces

Do, Won Kyung, Dhawan, Ankush Kundan, Kitzmann, Mathilda, Kennedy, Monroe III

arXiv.org Artificial Intelligence

Abstract-- Dexterous manipulation, especially of small everyday objects, continues to pose complex challenges in robotics. This paper introduces the DenseTact-Mini, an optical tactile sensor with a soft, rounded, smooth gel surface and a compact design equipped with a synthetic fingernail. We propose three distinct grasping strategies: tap grasping using adhesion forces such as electrostatic and van der Waals forces, fingernail grasping leveraging rolling/sliding contact between the object and the fingernail, and fingertip grasping with two soft fingertips. Through comprehensive evaluations, the DenseTact-Mini demonstrates a lifting success rate exceeding 90.2% when grasping various objects, spanning 1 mm basil seeds and small paperclips up to items nearly 15 mm in size. This work demonstrates the potential of soft optical tactile sensors for dexterous manipulation and grasping.


AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing

Chen, Zhaoliang, Wu, Zhihao, Lin, Zhenghong, Wang, Shiping, Plant, Claudia, Guo, Wenzhong

arXiv.org Artificial Intelligence

Graph Convolutional Networks (GCNs), with their powerful capacity to explore graph-structured data, have gained noticeable success in recent years. Nonetheless, most existing GCN-based models suffer from the notorious over-smoothing issue, owing to which shallow networks are extensively adopted. This may be problematic for complex graph datasets, because a deeper GCN should be beneficial to propagating information across remote neighbors. Recent works have devoted effort to addressing the over-smoothing problem, including establishing residual connection structures or fusing predictions from multi-layer models. Because embeddings from deep layers are indistinguishable, it is reasonable to generate more reliable predictions before combining the outputs of the various layers. In light of this, we propose an Alternating Graph-regularized Neural Network (AGNN) composed of a Graph Convolutional Layer (GCL) and a Graph Embedding Layer (GEL). GEL is derived from a graph-regularized optimization containing a Laplacian embedding term, which can alleviate the over-smoothing problem by periodic projection from the low-order feature space onto the high-order space. With more distinguishable features across layers, an improved AdaBoost strategy is utilized to aggregate the outputs of each layer, which explores integrated embeddings of multi-hop neighbors. The proposed model is evaluated via a large number of experiments, including performance comparisons with multi-layer and multi-order graph neural networks, which reveal the superior performance of AGNN compared with state-of-the-art models.
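The "graph-regularized optimization containing a Laplacian embedding term" mentioned above has a standard closed form worth spelling out: minimizing ||H - X||_F^2 + lam * tr(H^T L H) over H gives H = (I + lam * L)^(-1) X, where L = D - A is the graph Laplacian. The sketch below implements that closed form as an illustration of the Laplacian-embedding idea; it is not claimed to be AGNN's exact GEL layer.

```python
import numpy as np

def graph_embedding_layer(X, A, lam=1.0):
    """Closed-form minimizer of ||H - X||_F^2 + lam * tr(H^T L H):
    H = (I + lam * L)^(-1) X, with L = D - A the unnormalized
    graph Laplacian. Smooths features toward neighbors while
    staying anchored to the input X."""
    D = np.diag(A.sum(axis=1))     # degree matrix
    L = D - A                      # graph Laplacian
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, X)
```

On a two-node graph with one edge and features (1, 0), the layer pulls the features toward each other (to 2/3 and 1/3 at lam = 1) without collapsing them, which is the anchoring behavior that plain repeated neighborhood averaging, and hence a deep GCN, lacks.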


DenseTact 2.0: Optical Tactile Sensor for Shape and Force Reconstruction

Do, Won Kyung, Jurewicz, Bianca, Kennedy, Monroe III

arXiv.org Artificial Intelligence

Abstract-- Collaborative robots stand to have an immense impact on both human welfare in domestic service applications and industrial superiority in advanced manufacturing with dexterous assembly. The outstanding challenge is providing robotic fingertips with a physical design that makes them adept at performing dexterous tasks that require high-resolution, calibrated shape reconstruction and force sensing. In this work, we present DenseTact 2.0, an optical-tactile sensor capable of visualizing the deformed surface of a soft fingertip and using that image in a neural network to perform both calibrated shape reconstruction and 6-axis wrench estimation. We demonstrate sensor accuracy of 0.3633 mm per pixel for shape reconstruction, 0.410 N for forces, 0.387 N·mm for torques, and the ability to calibrate new fingers through transfer learning, which achieves comparable performance with only 12% of the non-transfer-learning dataset size. I. INTRODUCTION Robots must be able to manipulate objects with dexterity comparable to human performance in order to be effective collaborators in environments designed for humans. (Figure 1: DenseTact 2.0 sensors pinching an ATI Nano.)