
Collaborating Authors

 Huber, Manfred


FlowMP: Learning Motion Fields for Robot Planning with Conditional Flow Matching

arXiv.org Artificial Intelligence

Prior flow matching methods in robotics have primarily learned velocity fields to morph one distribution of trajectories into another. In this work, we extend flow matching to capture second-order trajectory dynamics, incorporating acceleration effects either explicitly in the model or implicitly through the learning objective. Unlike diffusion models, which rely on a noisy forward process and iterative denoising steps, flow matching trains a continuous transformation (flow) that directly maps a simple prior distribution to the target trajectory distribution without any denoising procedure. By modeling trajectories with second-order dynamics, our approach ensures that generated robot motions are smooth and physically executable, avoiding the jerky or dynamically infeasible trajectories that first-order models might produce. We empirically demonstrate that this second-order conditional flow matching yields superior performance on motion planning benchmarks, achieving smoother trajectories and higher success rates than baseline planners. These findings highlight the advantage of learning acceleration-aware motion fields.
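The abstract does not spell out the regression targets, but the contrast between first- and second-order flow matching can be sketched along a straight-line path between a prior sample and a data trajectory. The smoothstep time-warp below is a hypothetical stand-in for the paper's actual second-order path, chosen only because it yields a nonzero acceleration target:

```python
import numpy as np

def flow_matching_targets(x0, x1, t):
    """First-order (linear) probability path: x_t interpolates the prior
    sample x0 and the data sample x1; the regression target for the
    velocity field is the constant x1 - x0."""
    xt = (1.0 - t) * x0 + t * x1
    vt = x1 - x0
    return xt, vt

def second_order_targets(x0, x1, t):
    """Hypothetical second-order variant: a smoothstep time-warp
    s(t) = 3t^2 - 2t^3 has zero velocity at both endpoints, so the path
    also carries an acceleration target the model can regress on."""
    s = 3.0 * t**2 - 2.0 * t**3       # warped time, s(0) = 0, s(1) = 1
    ds = 6.0 * t - 6.0 * t**2         # ds/dt
    d2s = 6.0 - 12.0 * t              # d^2 s / dt^2
    xt = (1.0 - s) * x0 + s * x1
    vt = ds * (x1 - x0)               # velocity target
    at = d2s * (x1 - x0)              # acceleration target
    return xt, vt, at
```

A velocity network would regress `vt` at `(xt, t)`; the second-order variant additionally regresses `at`. Note that this path pins the velocity to zero at both endpoints, which is one simple way to encourage smooth starts and stops.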


Volumetric Mapping with Panoptic Refinement via Kernel Density Estimation for Mobile Robots

arXiv.org Artificial Intelligence

Reconstructing three-dimensional (3D) scenes with semantic understanding is vital in many robotic applications. Robots need to identify objects, along with their positions and shapes, to manipulate them precisely for given tasks. Mobile robots, especially, usually use lightweight networks to segment objects in RGB images and then localize them via depth maps; however, they often encounter out-of-distribution scenarios in which masks over-cover the objects. In this paper, we address the problem of panoptic segmentation quality in 3D scene reconstruction by refining segmentation errors using non-parametric statistical methods. To enhance mask precision, we map the predicted masks into the depth frame to estimate their distribution via kernel densities. Outliers in depth perception are then rejected, without the need for additional parameters, in a manner adaptive to out-of-distribution scenarios, followed by 3D reconstruction using projective signed distance functions (SDFs). We validate our method on a synthetic dataset, showing improvements in both quantitative and qualitative results for panoptic mapping. Through real-world testing, the results furthermore show our method's capability to be deployed on a real robot system. Our source code is available at: https://github.com/mkhangg/refined_panoptic_mapping.
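As an illustrative sketch of the kernel-density step, the depths under a predicted mask can be scored by a Gaussian kernel density estimate, and pixels in low-density regions rejected as over-coverage. The Silverman bandwidth rule and the relative density threshold below are our assumptions, not the paper's:

```python
import numpy as np

def refine_mask_by_depth(depths, bandwidth=None, density_frac=0.2):
    """Keep only mask pixels whose depth lies in a high-density region of
    the per-mask depth distribution, estimated with a Gaussian kernel
    density. `density_frac` keeps points whose estimated density exceeds
    that fraction of the peak density."""
    d = np.asarray(depths, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb (fallback for degenerate, constant depth)
        bandwidth = 1.06 * d.std() * len(d) ** (-1 / 5) or 1e-3
    # Pairwise Gaussian kernels: estimated density at each sample point
    diff = (d[:, None] - d[None, :]) / bandwidth
    dens = np.exp(-0.5 * diff**2).sum(axis=1) / (len(d) * bandwidth * np.sqrt(2 * np.pi))
    return dens >= density_frac * dens.max()
```

Because the threshold is relative to the mask's own density peak, no per-scene parameters are needed, which loosely mirrors the adaptivity the abstract describes.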


Weakly Supervised Multi-Task Representation Learning for Human Activity Analysis Using Wearables

arXiv.org Artificial Intelligence

Sensor data streams from wearable devices and smart environments are widely studied in areas like human activity recognition (HAR), person identification, and health monitoring. However, most previous work on activity and sensor stream analysis has focused on one aspect of the data, e.g. only recognizing the type of activity or only identifying the person who performed it. We instead propose an approach that uses a weakly supervised multi-output siamese network to map the data into multiple representation spaces, where each representation space focuses on one aspect of the data. The representation vectors of the data samples are positioned in each space such that data with the same semantic meaning in that aspect are located close to each other. Therefore, as demonstrated with a set of experiments, the trained model can provide metrics for clustering data based on multiple aspects, allowing it to address multiple tasks simultaneously and even to outperform single-task supervised methods in many situations. In addition, further experiments analyze in more detail the effect of the architecture and of using multiple tasks within this framework, investigate the scalability of the model to additional tasks, and demonstrate the ability of the framework to combine data for which only partial relationship information with respect to the target tasks is available.


Unsupervised Embedding Learning for Human Activity Recognition Using Wearable Sensor Data

arXiv.org Artificial Intelligence

The embedded sensors in widely used smartphones and other wearable devices make data on human activities more accessible. However, recognizing different human activities from wearable sensor data remains a challenging research problem in ubiquitous computing. One reason is that the majority of the acquired data has no labels. In this paper, we present an unsupervised approach, based on the nature of human activity, that projects human activities into an embedding space in which similar activities are located close together. Subsequent clustering algorithms can then benefit from the embeddings, forming behavior clusters that represent the distinct activities performed by a person. Results of experiments on three labeled benchmark datasets demonstrate the effectiveness of the framework and show that our approach helps the clustering algorithm achieve improved performance in identifying and categorizing the underlying human activities compared to unsupervised techniques applied directly to the original data.


Siamese Networks for Weakly Supervised Human Activity Recognition

arXiv.org Artificial Intelligence

Deep learning has been successfully applied to human activity recognition. However, training deep neural networks requires explicitly labeled data, which is difficult to acquire. In this paper, we present a model with multiple siamese networks that are trained using only information about the similarity between pairs of data samples, without knowing the explicit labels. The trained model maps the activity data samples into fixed-size representation vectors such that the distance between the vectors in the representation space approximates the similarity of the data samples in the input space. Thus, the trained model can work as a metric for a wide range of different clustering algorithms. The training process minimizes a similarity loss function that forces the distance metric to be small for pairs of samples from the same kind of activity, and large for pairs of samples from different kinds of activities. We evaluate the model on three datasets to verify its effectiveness in segmentation and recognition of continuous human activity sequences.
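A minimal sketch of the similarity loss described above, written as a standard margin-based contrastive loss over embedding pairs (the paper's exact formulation may differ):

```python
import numpy as np

def contrastive_loss(za, zb, same, margin=1.0):
    """Similarity loss for siamese pairs: pull embeddings of same-activity
    pairs together, push different-activity pairs at least `margin` apart.
    `same` is 1.0 for positive pairs, 0.0 for negative pairs."""
    dist = np.linalg.norm(za - zb, axis=-1)
    pos = same * dist**2                                  # same activity: small distance
    neg = (1 - same) * np.maximum(margin - dist, 0.0)**2  # different: at least margin apart
    return 0.5 * np.mean(pos + neg)
```

After training, the learned embedding distance can be handed to any distance-based clustering algorithm in place of a hand-crafted metric.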


ExtPerFC: An Efficient 2D and 3D Perception Hardware-Software Framework for Mobile Cobot

arXiv.org Artificial Intelligence

As the reliability of a robot's perception correlates with the number of sensing modalities integrated to tackle uncertainty, a practical solution is needed to manage these sensors across different computers, operate them simultaneously, and maintain their real-time performance on the existing robotic system with minimal effort. In this work, we present an end-to-end software-hardware framework, namely ExtPerFC, that supports both conventional hardware and software components and integrates machine learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We first design our framework to achieve real-time performance on the existing robotic system, guarantee configuration optimization, and concentrate on code reusability. We then mathematically model and utilize our transfer learning strategies for 2D object detection and fuse them into depth images for 3D depth estimation. Lastly, we systematically test the proposed framework on the Baxter robot with two 7-DOF arms, a four-wheel mobility base, and an Intel RealSense D435i RGB-D camera. The results show that the robot achieves real-time performance while simultaneously executing other tasks (e.g., map building, localization, navigation, object detection, arm motion, and grasping) with available hardware like Intel onboard CPUs/GPUs on distributed computers. Also, to comprehensively control, program, and monitor the robot system, we design and introduce an end-user application. The source code is available at https://github.com/tuantdang/perception_framework.
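The fusion of 2D detections with depth for 3D estimation amounts to back-projecting detected pixels through the camera intrinsics. A minimal pinhole-model sketch (the intrinsic values in the example are illustrative, not the D435i's actual calibration):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a detected pixel (u, v) with depth z (meters) into the
    camera frame using the pinhole model: fx, fy are focal lengths in
    pixels and (cx, cy) is the principal point."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```

In practice one would back-project the detection's mask or box center using the aligned depth frame, then transform the resulting camera-frame point into the robot's base frame for grasping or navigation.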


Generalized Reinforcement Learning: Experience Particles, Action Operator, Reinforcement Field, Memory Association, and Decision Concepts

arXiv.org Artificial Intelligence

Learning a control policy capable of adapting to time-varying and potentially evolving system dynamics has been a great challenge for mainstream reinforcement learning (RL). Mainly, the ever-changing system properties continuously affect how the RL agent interacts with the state space through its actions, which effectively (re-)introduces concept drift into the underlying policy learning process. We postulate that higher adaptability of the control policy can be achieved by characterizing and representing actions with extra "degrees of freedom," thereby adjusting with greater flexibility to variations in an action's "behavioral" outcomes, including how the action is carried out in real time and shifts in the action set itself. This paper proposes a Bayesian-flavored generalized RL framework by first establishing the notion of a parametric action model to better cope with uncertainty and fluid action behaviors, followed by introducing the notion of a reinforcement field as a physics-inspired construct established through "polarized experience particles" maintained in the RL agent's working memory. These particles effectively encode the agent's dynamic learning experience, which evolves over time in a self-organizing way. Using the reinforcement field as a substrate, we further generalize the policy search to incorporate high-level decision concepts by viewing past memory as an implicit graph structure in which the memory instances, or particles, are interconnected, with their degrees of associability/similarity defined and quantified such that the "associative memory" principle can be consistently applied to establish and augment the learning agent's evolving world model.


Increasing Fairness in Predictions Using Bias Parity Score Based Loss Function Regularization

arXiv.org Artificial Intelligence

The use of automated decision support and decision-making systems (ADM) (Hardt, Price, and Srebro 2016) in applications with direct impact on people's lives has increasingly become a fact of life, e.g. in criminal justice (Kleinberg, Mullainathan, and Raghavan 2016; Jain et al. 2020b; Dressel and Farid 2018), medical diagnosis (Kleinberg, Mullainathan, and Raghavan 2016; Ahsen, Ayvaci, and Raghunathan 2019), insurance (Baudry and Robert 2019), credit card fraud detection (Dal Pozzolo et al. 2014), electronic health record data (Gianfrancesco et al. 2018), credit scoring (Huang, Chen, and Wang 2007), and many more diverse domains. This, in turn, has led to an urgent need for study and scrutiny of the bias-magnifying effects of machine learning and Artificial Intelligence algorithms and thus their potential to introduce and emphasize social inequalities and systematic discrimination in our society. Appropriately, much research is currently being done to mitigate bias in AI-based decision support systems (Ahsen, Ayvaci, and Raghunathan 2019; Kleinberg, Mullainathan, and Raghavan 2016; Noriega-Campero et al. 2019; Feldman 2015). Contributions. We propose a technique that uses Bias Parity Score (BPS) measures to characterize fairness and develop a family of corresponding loss functions that are used as regularizers during training of neural networks to enhance the fairness of the trained models. The goal here is to permit the system to actively pursue fair solutions during training while maintaining as high a performance on the task as possible. We apply the approach in the context of several fairness measures and investigate multiple loss function formulations and regularization weights in order to study the performance as well as potential drawbacks and deployment considerations. In these experiments we show that, with appropriate settings, the technique measurably reduces race-based bias in recidivism prediction, and demonstrate on the gender-based Adult Income dataset that the proposed method can outperform state-of-the-art techniques aimed at more targeted aspects of bias and fairness.
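A hypothetical sketch of a BPS-style regularizer: take some per-group fairness statistic (e.g. the true-positive rate for each protected group), measure parity as the ratio of the smaller to the larger, and penalize the task loss as parity drops. The ratio form and penalty shape here are our assumptions; the paper defines BPS and the loss family precisely:

```python
def bias_parity_regularizer(stat_a, stat_b, lam=1.0, eps=1e-8):
    """Penalty added to the task loss during training. `stat_a` and
    `stat_b` are the same statistic computed per protected group; the
    Bias Parity Score is their min/max ratio (1.0 = perfect parity),
    and the penalty lam * (1 - BPS) grows as parity drops."""
    bps = min(stat_a, stat_b) / (max(stat_a, stat_b) + eps)
    return lam * (1.0 - bps)
```

During training the total objective would be `task_loss + bias_parity_regularizer(...)`, so the regularization weight `lam` trades predictive performance against group parity.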


Learning the Next Best View for 3D Point Clouds via Topological Features

arXiv.org Artificial Intelligence

In this paper, we introduce a reinforcement learning approach utilizing a novel topology-based information gain metric for directing the next best view of a noisy 3D sensor. The metric combines the disjoint sections of an observed surface to focus on high-detail features such as holes and concave sections. Experimental results show that our approach can aid in establishing the placement of a robotic sensor to optimize the information provided by its streaming point cloud data. Furthermore, a labeled dataset of 3D objects, a CAD design for a custom robotic manipulator, and software for the transformation, union, and registration of point clouds have been publicly released to the research community.


Semi-Unsupervised Clustering Using Reinforcement Learning

AAAI Conferences

Clusters defined over a dataset by unsupervised clustering often present groupings that differ from the expected solution. This is primarily the case when some scarce knowledge of the problem exists beforehand that partially identifies desired characteristics of the clusters. However, conventional clustering algorithms are not designed to accept any supervision from the external world, as they are supposed to be completely unsupervised. As a result, they cannot benefit from or effectively take into account available information about the use or properties of the clusters. In this paper we propose a reinforcement learning approach to address this problem, in which existing, unmodified unsupervised clustering algorithms are augmented so that the available sparse information is utilized to achieve more appropriate clusters. Our model works with any clustering algorithm, but the input to the algorithm, instead of being the original dataset, is a scaled version of it, where the scaling factors are determined by the reinforcement learning algorithm.
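The core idea, leaving the clustering algorithm untouched and letting the learner control only per-feature scaling factors, can be sketched as follows. The clustering function and the pairwise-constraint reward below are placeholders for whatever algorithm and sparse supervision are actually available:

```python
import numpy as np

def scaled_clustering(X, scales, cluster_fn):
    """Run an unmodified clustering algorithm on a per-feature scaled copy
    of the data; supervision enters only through `scales`."""
    return cluster_fn(X * np.asarray(scales, dtype=float))

def constraint_reward(labels, must_link, cannot_link):
    """Reward signal for the reinforcement learner: the fraction of the
    known sparse pairwise constraints the clustering satisfies."""
    hits = sum(labels[i] == labels[j] for i, j in must_link)
    hits += sum(labels[i] != labels[j] for i, j in cannot_link)
    return hits / (len(must_link) + len(cannot_link))
```

A learner would then adjust `scales` to maximize `constraint_reward`, stretching the features that matter for the desired grouping and shrinking those that do not, while the clustering algorithm itself remains a black box.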