Schirner, Gunar
Reinforcement Learning-Based Model Matching to Reduce the Sim-Real Gap in COBRA
Salagame, Adarsh, Nallaguntla, Harin Kumar, Sihite, Eric, Schirner, Gunar, Ramezani, Alireza
Abstract-- This paper employs a reinforcement learning-based model identification method aimed at enhancing the accuracy of the dynamic model of our snake robot, COBRA. Leveraging gradient information and iterative optimization, the proposed approach refines the parameters of COBRA's dynamical model, such as the coefficient of friction and actuator parameters, using experimental and simulated data. Experimental validation on the hardware platform demonstrates the efficacy of the proposed approach, highlighting its potential to address the sim-to-real gap in robot implementation.
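A minimal sketch of the kind of gradient-based, iterative parameter refinement the abstract describes: simulated rollouts are compared against logged hardware trajectories and the friction/actuator parameters are nudged down the error gradient. The toy dynamics, the `simulate_rollout` name, and the optimizer settings are illustrative assumptions, not COBRA's actual model or the authors' method.

    import numpy as np

    def simulate_rollout(params, controls, dt=0.01):
        """Toy stand-in for the simulator: a 1-DOF actuator with viscous
        friction. params = [friction_coeff, actuator_gain]."""
        mu, k = params
        x, v = 0.0, 0.0
        traj = []
        for u in controls:
            a = k * u - mu * v          # actuator effort minus friction
            v += a * dt
            x += v * dt
            traj.append(x)
        return np.array(traj)

    def trajectory_error(params, controls, hardware_traj):
        """Mean squared error between simulated and measured trajectories."""
        return np.mean((simulate_rollout(params, controls) - hardware_traj) ** 2)

    def refine_parameters(params, controls, hardware_traj, lr=0.05, iters=200, eps=1e-4):
        """Iterative refinement using finite-difference gradients of the error."""
        params = np.array(params, dtype=float)
        for _ in range(iters):
            grad = np.zeros_like(params)
            for i in range(len(params)):
                p_hi, p_lo = params.copy(), params.copy()
                p_hi[i] += eps
                p_lo[i] -= eps
                grad[i] = (trajectory_error(p_hi, controls, hardware_traj)
                           - trajectory_error(p_lo, controls, hardware_traj)) / (2 * eps)
            params -= lr * grad
        return params

    # Illustration: recover "true" parameters from synthetic hardware data.
    controls = np.sin(np.linspace(0, 4 * np.pi, 400))
    hardware_traj = simulate_rollout([0.8, 2.0], controls)   # pretend measurements
    print(refine_parameters([0.3, 1.0], controls, hardware_traj))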
Dynamic Posture Manipulation During Tumbling for Closed-Loop Heading Angle Control
Salagame, Adarsh, Sihite, Eric, Schirner, Gunar, Ramezani, Alireza
Abstract-- Passive tumbling uses natural forces such as gravity for efficient travel, but without an active means of control, passive tumblers must rely entirely on external forces. Northeastern University's COBRA is a snake robot that can morph into a ring and employ passive tumbling to traverse down slopes. Thanks to its articulated joints, however, it is also capable of dynamically altering its posture to manipulate the dynamics of the tumbling locomotion for active steering. This paper presents a modelling and control strategy based on collocation optimization for real-time steering of COBRA's tumbling locomotion.
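A minimal direct-transcription sketch in the spirit of the collocation optimization mentioned above: states and controls at knot points are decision variables, tied together by dynamics defect constraints, with a terminal heading objective. The one-dimensional heading model, gains, and horizon are invented for illustration and are not COBRA's tumbling dynamics.

    import numpy as np
    from scipy.optimize import minimize

    # Toy model: heading psi evolves as psi_dot = c * u, where u is a posture
    # command. Decision vector packs [psi_0..psi_N, u_0..u_{N-1}].
    N, dt, c = 20, 0.1, 1.5
    psi_goal = 0.6            # desired heading change [rad]

    def unpack(z):
        return z[:N + 1], z[N + 1:]

    def objective(z):
        psi, u = unpack(z)
        # Control effort plus terminal heading error.
        return dt * np.sum(u ** 2) + 50.0 * (psi[-1] - psi_goal) ** 2

    def defects(z):
        psi, u = unpack(z)
        # Collocation-style (explicit Euler) defect constraints between knots.
        return psi[1:] - psi[:-1] - dt * c * u

    z0 = np.zeros(2 * N + 1)
    res = minimize(objective, z0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": defects},
                                {"type": "eq", "fun": lambda z: z[:1]}],  # psi_0 = 0
                   bounds=[(None, None)] * (N + 1) + [(-1.0, 1.0)] * N)
    psi_opt, u_opt = unpack(res.x)
    print("final heading:", psi_opt[-1])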
Loco-Manipulation with Nonimpulsive Contact-Implicit Planning in a Slithering Robot
Salagame, Adarsh, Gangaraju, Kruthika, Nallaguntla, Harin Kumar, Sihite, Eric, Schirner, Gunar, Ramezani, Alireza
Abstract-- Object manipulation has been extensively studied in the context of fixed-base and mobile manipulators. However, the overactuated locomotion modality employed by snake robots allows for a unique blend of object manipulation through locomotion, referred to as loco-manipulation. The following work presents an optimization approach to solving the loco-manipulation problem based on non-impulsive implicit contact path planning for our snake robot COBRA. We present the mathematical framework and show high-fidelity simulation results and experiments to demonstrate the effectiveness of our approach.
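A minimal sketch of the non-impulsive, contact-implicit idea: normal forces and gap distances enter the optimization as decision variables coupled by (relaxed) complementarity constraints, rather than being resolved through impulsive impact events. The point-mass setup, cost, and solver choice are illustrative assumptions, not the paper's formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal contact-implicit subproblem: a point mass of weight m*g resting
    # on flat ground. Decision variables: gap q and normal force lam.
    m, g, eps = 1.0, 9.81, 1e-6

    def objective(z):
        q, lam = z
        # Penalize the residual of static force balance (lam = m*g in contact).
        return (lam - m * g) ** 2

    cons = [
        {"type": "ineq", "fun": lambda z: z[0]},              # gap   q   >= 0
        {"type": "ineq", "fun": lambda z: z[1]},              # force lam >= 0
        {"type": "ineq", "fun": lambda z: eps - z[0] * z[1]}, # relaxed complementarity q*lam <= eps
    ]

    res = minimize(objective, x0=[0.1, 0.0], method="SLSQP", constraints=cons)
    print("gap, normal force:", res.x)   # expect gap ~ 0, force ~ m*g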
Non-impulsive Contact-Implicit Motion Planning for Morpho-functional Loco-manipulation
Salagame, Adarsh, Gangaraju, Kruthika, Nallaguntla, Harin Kumar, Sihite, Eric, Schirner, Gunar, Ramezani, Alireza
Abstract-- Object manipulation has been extensively studied in the context of fixed-base and mobile manipulators. However, the overactuated locomotion modality employed by snake robots allows for a unique blend of object manipulation through locomotion, referred to as loco-manipulation. The following work presents an optimization approach to solving the loco-manipulation problem based on non-impulsive implicit contact path planning for our snake robot COBRA. We present the mathematical framework and show high-fidelity simulation results for fixed-shape lateral rolling trajectories that demonstrate object manipulation.
Enhancing Automatic Modulation Recognition for IoT Applications Using Transformers
Rashvand, Narges, Witham, Kenneth, Maldonado, Gabriel, Katariya, Vinit, Prabhu, Nishanth Marer, Schirner, Gunar, Tabkhi, Hamed
Automatic modulation recognition (AMR) is vital for accurately identifying modulation types within incoming signals, a critical task for optimizing operations within edge devices in IoT ecosystems. This paper presents an innovative approach that leverages Transformer networks, initially designed for natural language processing, to address the challenges of efficient AMR. Our Transformer network architecture is designed with real-time edge computing on IoT devices in mind. Four tokenization techniques are proposed and explored for creating proper embeddings of RF signals, specifically focusing on overcoming the limitations related to model size often encountered in IoT scenarios. Extensive experiments reveal that our proposed method outperforms advanced deep learning techniques, achieving the highest recognition accuracy. Notably, our model achieves an accuracy of 65.75 on the RML2016 dataset and 65.80 on the CSPB.ML.2018+ dataset.
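A hedged sketch, in PyTorch, of one way to tokenize I/Q frames into patch embeddings and classify them with a compact Transformer encoder, as the abstract describes at a high level. Layer sizes, patch length, and the number of classes are illustrative choices, not the paper's configuration or one of its four specific tokenizers.

    import torch
    import torch.nn as nn

    class IQPatchTransformer(nn.Module):
        """Small Transformer classifier over tokenized I/Q frames."""
        def __init__(self, frame_len=128, patch_len=8, d_model=64,
                     n_heads=4, n_layers=2, n_classes=11):
            super().__init__()
            assert frame_len % patch_len == 0
            self.patch_len = patch_len
            self.embed = nn.Linear(2 * patch_len, d_model)      # I and Q per patch
            self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
            n_tokens = frame_len // patch_len + 1
            self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, x):                      # x: (batch, 2, frame_len)
            b, _, t = x.shape
            # Tokenize: split the frame into patches and flatten I/Q per patch.
            x = x.reshape(b, 2, t // self.patch_len, self.patch_len)
            x = x.permute(0, 2, 1, 3).reshape(b, -1, 2 * self.patch_len)
            tokens = self.embed(x)
            tokens = torch.cat([self.cls.expand(b, -1, -1), tokens], dim=1) + self.pos
            enc = self.encoder(tokens)
            return self.head(enc[:, 0])            # classify from the [CLS] token

    logits = IQPatchTransformer()(torch.randn(4, 2, 128))
    print(logits.shape)                            # (4, 11)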
Multistatic-Radar RCS-Signature Recognition of Aerial Vehicles: A Bayesian Fusion Approach
Potter, Michael, Akcakaya, Murat, Necsoiu, Marius, Schirner, Gunar, Erdogmus, Deniz, Imbiriba, Tales
Radar Automated Target Recognition (RATR) for Unmanned Aerial Vehicles (UAVs) involves transmitting Electromagnetic Waves (EMWs) and performing target type recognition on the received radar echo, crucial for defense and aerospace applications. Previous studies highlighted the advantages of multistatic radar configurations over monostatic ones in RATR. However, fusion methods in multistatic radar configurations often combine classification vectors from individual radars probabilistically in a suboptimal way. To address this, we propose a fully Bayesian RATR framework employing Optimal Bayesian Fusion (OBF) to aggregate classification probability vectors from multiple radars. OBF, based on expected 0-1 loss, updates a Recursive Bayesian Classification (RBC) posterior distribution for target UAV type, conditioned on historical observations across multiple time steps. We evaluate the approach using simulated random walk trajectories for seven drones, correlating target aspect angles to Radar Cross Section (RCS) measurements in an anechoic chamber. Compared against single-radar Automated Target Recognition (ATR) systems and suboptimal fusion methods, our empirical results demonstrate that the OBF method integrated with RBC significantly enhances classification accuracy.
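A simplified sketch of recursive Bayesian classification with multi-radar fusion: per-radar classification probability vectors are treated as independent likelihoods and folded into a running posterior over target types at each time step. This product-of-likelihoods rule is a stand-in for the paper's Optimal Bayesian Fusion; the radar count, drone index, and synthetic probability vectors are invented for illustration.

    import numpy as np

    def fuse_and_update(prior, radar_probs):
        """One recursive Bayesian step: combine per-radar classification
        probability vectors (treated as independent likelihoods) with the
        running posterior over target types."""
        posterior = prior.copy()
        for p in radar_probs:
            posterior *= p
        return posterior / posterior.sum()

    n_types = 7                                   # seven drone types
    posterior = np.full(n_types, 1.0 / n_types)   # uniform initial belief

    rng = np.random.default_rng(0)
    for t in range(10):                           # simulated time steps
        # Each radar reports a noisy probability vector peaked at the true type (index 2).
        radar_probs = [rng.dirichlet(np.ones(n_types) + 4 * np.eye(n_types)[2])
                       for _ in range(3)]         # three radars in the multistatic setup
        posterior = fuse_and_update(posterior, radar_probs)

    print("most probable type:", posterior.argmax(), posterior.round(3))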
Segmentation and Classification of EMG Time-Series During Reach-to-Grasp Motion
Han, Mo, Zandigohar, Mehrshad, Furmanek, Mariusz P., Yarossi, Mathew, Schirner, Gunar, Erdogmus, Deniz
Electromyography (EMG) signals have been widely utilized in human-robot interaction for extracting user hand and arm motion instructions. A major challenge of online interaction with robots is reliable EMG recognition from real-time data. However, previous studies mainly focused on using steady-state EMG signals with a small number of grasp patterns to implement classification algorithms, which is insufficient for generating robust control under the dynamic muscular activity variations encountered in practice. Introducing more EMG variability during training and validation could enable better dynamic-motion detection, but only limited research has focused on such grasp-movement identification, and those assessments of non-static EMG classification require supervised ground-truth labels of the movement status. In this study, we propose a framework for classifying EMG signals generated from continuous grasp movements with variations in dynamic arm/hand postures, using an unsupervised motion-status segmentation method. We collected data from large gesture vocabularies with multiple dynamic motion phases to encode the transitions from one intent to another based on common sequences of the grasp movements. Two classifiers were constructed for identifying the motion-phase label and grasp-type label, where the dynamic motion phases were segmented and labeled in an unsupervised manner. The proposed framework was evaluated in real time, with the accuracy variation over time presented, and was shown to be efficient given the high degrees of freedom of the EMG data.
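A minimal sketch of the two-stage idea on synthetic data: windowed EMG features are segmented into motion phases without labels (k-means here as a simple stand-in for the paper's unsupervised segmentation), and a separate supervised classifier handles grasp type. The feature choice, phase count, and placeholder labels are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def window_features(emg, win=50):
        """RMS feature per channel over non-overlapping windows.
        emg: (samples, channels) -> (n_windows, channels)."""
        n = emg.shape[0] // win
        return np.sqrt((emg[:n * win].reshape(n, win, -1) ** 2).mean(axis=1))

    # Synthetic stand-in for a reach-to-grasp recording: rest -> movement -> hold.
    rng = np.random.default_rng(1)
    emg = np.concatenate([0.1 * rng.standard_normal((500, 8)),
                          1.0 * rng.standard_normal((500, 8)),
                          0.4 * rng.standard_normal((500, 8))])
    feats = window_features(emg)

    # Unsupervised motion-status segmentation: cluster windows into phases.
    phase = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
    print("phase labels per window:", phase)

    # Grasp-type classifier trained on (here, synthetic) labeled windows.
    grasp_labels = rng.integers(0, 3, size=len(feats))        # placeholder labels
    clf = SVC(probability=True).fit(feats, grasp_labels)
    print("predicted grasp type for first window:", clf.predict(feats[:1]))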
Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control
Zandigohar, Mehrshad, Han, Mo, Sharif, Mohammadreza, Gunay, Sezen Yagmur, Furmanek, Mariusz P., Yarossi, Mathew, Bonato, Paolo, Onal, Cagdas, Padir, Taskin, Erdogmus, Deniz, Schirner, Gunar
For lower arm amputees, robotic prosthetic hands offer the promise to regain the capability to perform fine object manipulation in activities of daily living. Current control methods based on physiological signals such as EEG and EMG are prone to poor inference outcomes due to motion artifacts, variability of skin-electrode junction impedance over time, muscle fatigue, and other factors. Visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, and variable shapes of objects depending on view angle, among other factors. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach due to the complementary strengths of these modalities. In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, gaze, and EMG from the forearm processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train the neural network components. Our experimental data analyses demonstrate that EMG and visual evidence show complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time. Specifically, results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy while in the reaching phase by 13.66% and 14.8%, relative to EMG and visual evidence individually. An overall fusion accuracy of 95.3% among 13 labels (compared to a chance level of 7.7%) is achieved, and a more detailed analysis indicates that the correct grasp is inferred sufficiently early, and with high confidence compared to the top contender, to allow successful robot actuation to close the loop.
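A minimal sketch of Bayesian evidence fusion for grasp intent under a conditional-independence assumption: the per-modality posteriors are multiplied, divided by the class prior, and renormalized. The 13-way label space matches the abstract, but the probability vectors and this particular fusion rule are illustrative, not the paper's trained models or exact formulation.

    import numpy as np

    def fuse_grasp_posteriors(p_emg, p_vision, prior):
        """Combine EMG- and vision-based grasp posteriors assuming the two
        modalities are conditionally independent given the grasp class:
        p(c | emg, vision) ~ p(c | emg) * p(c | vision) / p(c)."""
        fused = p_emg * p_vision / prior
        return fused / fused.sum()

    n_grasps = 13
    prior = np.full(n_grasps, 1.0 / n_grasps)

    # Illustrative network outputs at one instant during the reach.
    p_emg = np.full(n_grasps, 0.05); p_emg[3] = 0.4; p_emg /= p_emg.sum()
    p_vision = np.full(n_grasps, 0.04); p_vision[3] = 0.3; p_vision[7] = 0.26
    p_vision /= p_vision.sum()

    fused = fuse_grasp_posteriors(p_emg, p_vision, prior)
    print("fused top grasp:", fused.argmax(), "confidence:", fused.max().round(3))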
From Hand-Perspective Visual Information to Grasp Type Probabilities: Deep Learning via Ranking Labels
Han, Mo, Günay, Sezen Yağmur, Yıldız, İlkay, Bonato, Paolo, Onal, Cagdas D., Padır, Taşkın, Schirner, Gunar, Erdoğmuş, Deniz
Limb deficiency severely affects the daily lives of amputees and drives efforts to provide functional robotic prosthetic hands to compensate for this loss. Convolutional neural network-based computer vision control of prosthetic hands has received increased attention as a reliable method to replace or complement physiological signals, training on visual information to predict the intended hand gesture. Mounting a camera in the palm of a prosthetic hand has proved to be a promising approach for collecting visual data. However, grasp types labelled from the eye and hand perspectives may differ, as object shapes are not always symmetric. Thus, to represent this difference in a realistic way, we employed a dataset containing synchronous images from eye- and hand-view, where the hand-perspective images are used for training while the eye-view images are only for manual labelling. Electromyogram (EMG) activity and movement kinematics data from the upper arm are also collected for multimodal information fusion in future work. Moreover, in order to include human-in-the-loop control and combine computer vision with physiological signal inputs, instead of making absolute positive or negative predictions, we build a novel probabilistic classifier according to the Plackett-Luce model. To predict the probability distribution over grasps, we exploit the statistical model over label rankings to solve the permutation-domain problem via maximum likelihood estimation, utilizing manually ranked lists of grasps as a new form of label. We indicate that the proposed model is applicable to the most popular and productive convolutional neural network frameworks.
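A hedged sketch of the Plackett-Luce ranking likelihood used as a training loss: given per-grasp scores (for example, CNN outputs) and a manually ranked list of grasps, the negative log-likelihood factorizes over successive choices from the remaining candidates. The scores and ranking below are placeholders, not values from the paper.

    import torch

    def plackett_luce_nll(scores, ranking):
        """Negative log-likelihood of an observed grasp ranking under the
        Plackett-Luce model, given per-grasp scores (e.g., CNN logits).
        ranking lists grasp indices from most to least preferred."""
        s = scores[ranking]                       # reorder scores by preference
        nll = 0.0
        for k in range(len(ranking)):
            # Probability of picking item k next among the remaining items.
            nll = nll - (s[k] - torch.logsumexp(s[k:], dim=0))
        return nll

    # Illustration: 5 candidate grasps, one manually ranked list per image.
    scores = torch.randn(5, requires_grad=True)          # stand-in CNN outputs
    ranking = torch.tensor([2, 0, 4, 1, 3])              # most -> least preferred
    loss = plackett_luce_nll(scores, ranking)
    loss.backward()                                      # usable as a training loss
    print(float(loss), scores.grad)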
HANDS: A Multimodal Dataset for Modeling Towards Human Grasp Intent Inference in Prosthetic Hands
Han, Mo, Günay, Sezen Yağmur, Schirner, Gunar, Padır, Taşkın, Erdoğmuş, Deniz
Upper limb and hand functionality is critical to many activities of daily living, and amputation can lead to significant loss of functionality for individuals. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between a robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, providing the robotic hand with perception of the operational context. Such multimodal sensor data may come from various environment sensors, including vision, as well as human physiology and behavior sensors such as electromyography and inertial measurement units. A fusion methodology for environmental state and human intent estimation can combine these sources of evidence in order to help prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands and of computer vision methods needing to assess this hand-view visual evidence in order to estimate human intent. Specifically, paired images from human eye-view and hand-view of various objects placed at different orientations have been captured at the initial state of grasping trials, followed by paired video, EMG, and IMU recordings from the human's arm during a grasp, lift, put-down, and retract trial structure. For each trial, based on eye-view images of the scene showing the hand and object on a table, multiple humans were asked to sort, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images in order to predict the eye-view labels assigned by humans.
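A minimal PyTorch-style loader sketch for paired hand-view/eye-view images with human-ranked grasp labels, matching the pairing described above. The record fields, file layout, and class name are assumptions for illustration, not the dataset's actual schema or API.

    import torch
    from torch.utils.data import Dataset
    from PIL import Image

    class PairedGraspTrialDataset(Dataset):
        """Minimal loader for paired eye-view / hand-view images with
        human-ranked grasp labels. The `samples` records are assumed to be
        dicts with 'hand_img', 'eye_img', and 'grasp_ranking' entries."""
        def __init__(self, samples, transform=None):
            self.samples = samples
            self.transform = transform

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            rec = self.samples[idx]
            hand = Image.open(rec["hand_img"]).convert("RGB")
            eye = Image.open(rec["eye_img"]).convert("RGB")
            if self.transform is not None:
                hand, eye = self.transform(hand), self.transform(eye)
            # Ranking: grasp indices ordered from most to least preferred.
            ranking = torch.tensor(rec["grasp_ranking"], dtype=torch.long)
            return hand, eye, ranking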