radar
- North America > United States > Texas (0.04)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > Middle East > Israel (0.04)
MVDoppler: Unleashing the Power of Multi-View Doppler for MicroMotion-based Gait Classification
Modern perception systems rely heavily on high-resolution cameras, LiDARs, and advanced deep neural networks, enabling exceptional performance across various applications. However, these optical systems predominantly depend on geometric features and shapes of objects, which can be challenging to capture in long-range perception applications. To overcome this limitation, alternative approaches such as Doppler-based perception using high-resolution radars have been proposed. Doppler-based systems are capable of measuring micro-motions of targets remotely and with very high precision. Compared to geometric features, the resolution of micro-motion features is significantly more resilient to distance. However, the true potential of Doppler-based perception has yet to be fully realized due to several factors: the unintuitive nature of Doppler signals, the limited availability of public Doppler datasets, and the inability of current datasets to capture the co-factors unique to Doppler-based perception, such as the effect of the radar's observation angle and the target's motion trajectory. This paper introduces a new large multi-view Doppler dataset together with baseline perception models for micro-motion-based gait analysis and classification.
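The micro-motion features the MVDoppler abstract describes are typically visualized as a micro-Doppler spectrogram: a short-time Fourier transform of the radar's slow-time signal, in which limb micro-motions appear as time-varying sidebands around the body's bulk Doppler line. A minimal sketch (all signal parameters below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def micro_doppler_spectrogram(iq, fs, win=128, hop=32):
    """STFT of a complex slow-time radar signal.

    Returns the shifted Doppler-frequency axis and a magnitude
    spectrogram of shape (Doppler bins, time frames).
    """
    window = np.hanning(win)
    frames = []
    for start in range(0, len(iq) - win + 1, hop):
        seg = iq[start:start + win] * window
        frames.append(np.fft.fftshift(np.fft.fft(seg)))
    freqs = np.fft.fftshift(np.fft.fftfreq(win, d=1 / fs))
    return freqs, np.abs(np.array(frames)).T

# Toy target: 10 Hz bulk Doppler plus a 2 Hz sinusoidal micro-motion
# (a crude stand-in for a swinging limb during gait).
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
phase = 2 * np.pi * 10 * t + 2.0 * np.sin(2 * np.pi * 2 * t)
freqs, spec = micro_doppler_spectrogram(np.exp(1j * phase), fs)
```

The periodic sidebands in `spec` are exactly the kind of signature a gait classifier consumes, and their spacing in hertz does not degrade with target range the way pixel-level geometric detail does.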
RADAR: Robust AI-Text Detection via Adversarial Learning
Recent advances in large language models (LLMs) and the intensifying popularity of ChatGPT-like applications have blurred the boundary of high-quality text generation between humans and machines. However, in addition to the anticipated revolutionary changes to our technology and society, the difficulty of distinguishing LLM-generated texts (AI-text) from human-generated texts poses new challenges of misuse and fairness, such as fake content generation, plagiarism, and false accusations of innocent writers. While existing works show that current AI-text detectors are not robust to LLM-based paraphrasing, this paper aims to bridge this gap by proposing a new framework called RADAR, which jointly trains a $\underline{r}$obust $\underline{A}$I-text $\underline{d}$etector via $\underline{a}$dversarial lea$\underline{r}$ning. RADAR is based on adversarial training of a paraphraser and a detector. The paraphraser's goal is to generate realistic content to evade AI-text detection. RADAR uses the feedback from the detector to update the paraphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly 2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets, experimental results show that RADAR significantly outperforms existing AI-text detection methods, especially when paraphrasing is in place. We also identify the strong transferability of RADAR from instruction-tuned LLMs to other LLMs, and evaluate the improved capability of RADAR via GPT-3.5-Turbo.
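The paraphraser-detector game at the heart of RADAR can be illustrated with a deliberately tiny numerical analogue: a 1-D "style" feature stands in for text embeddings, a logistic classifier for the detector, and a learned scalar shift for the paraphraser. Everything below is a toy assumption for illustrating the alternating updates, not RADAR's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy features: human text clusters at +1, AI text at -1.
human = rng.normal(+1.0, 0.3, size=200)
ai = rng.normal(-1.0, 0.3, size=200)

w, b = 0.0, 0.0   # detector: P(human) = sigmoid(w*x + b)
shift = 0.0       # "paraphraser": a learned offset applied to AI features
lr = 0.1

for step in range(150):
    x = np.concatenate([human, ai + shift])
    y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = human
    p = sigmoid(w * x + b)
    # Detector step: gradient descent on cross-entropy.
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)
    # Paraphraser step: gradient *ascent* on the detector's
    # human-probability for paraphrased AI samples.
    pa = sigmoid(w * (ai + shift) + b)
    shift += lr * np.mean(pa * (1 - pa) * w)

baseline = sigmoid(w * ai + b).mean()            # unparaphrased AI text
evasion = sigmoid(w * (ai + shift) + b).mean()   # paraphrased AI text
```

The paraphraser drifts AI features toward the human cluster (`evasion > baseline`), which in turn forces the detector to keep tightening its boundary, mirroring the min-max dynamic the abstract describes.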
RLCNet: An end-to-end deep learning framework for simultaneous online calibration of LiDAR, RADAR, and Camera
Cholakkal, Hafeez Husain, Arrigoni, Stefano, Braghin, Francesco
AUTONOMOUS vehicles are poised to revolutionize transportation by improving road safety, reducing traffic congestion, and increasing mobility convenience [1]. To perceive and interact with their environment accurately, these vehicles rely on a combination of complementary sensors, including LiDAR, RADAR, and cameras. Each sensor offers unique advantages: cameras capture rich visual detail, LiDAR provides precise 3D spatial measurements, and RADAR performs robustly under adverse weather conditions [2]. Sensor fusion leverages the strengths of these modalities to ensure redundancy and resilience, allowing the vehicle to maintain accurate perception in diverse and dynamic environments [3]. A critical component of sensor fusion is extrinsic calibration, which involves determining the relative positions and orientations of sensors in a common coordinate frame. However, maintaining precise calibration over time is a persistent challenge. Factors such as mechanical vibrations, temperature changes, and minor collisions can lead to sensor drift, where even small misalignments in sensor orientation or position can result in substantial perception errors, potentially compromising vehicle safety.
- Europe > Netherlands > South Holland > Delft (0.04)
- Europe > Italy > Lombardy > Milan (0.04)
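The sensitivity to small extrinsic errors noted in the RLCNet excerpt is easy to quantify: an extrinsic calibration is a rigid transform p_cam = R p + t, and a fraction-of-a-degree rotational drift already displaces distant points by tens of centimeters. The transform values below are made-up illustrations, not calibration results from the paper.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_camera(points, R, t):
    """Map LiDAR points (N, 3) into the camera frame: p_cam = R @ p + t."""
    return points @ R.T + t

# Hypothetical nominal extrinsics vs. a 0.5-degree yaw drift.
R_true = rot_z(np.deg2rad(2.0))
R_drift = rot_z(np.deg2rad(2.5))
t_true = np.array([0.1, -0.05, 0.3])

target = np.array([[60.0, 0.0, 0.0]])  # a point 60 m ahead of the vehicle
err = np.linalg.norm(to_camera(target, R_true, t_true)
                     - to_camera(target, R_drift, t_true))
```

At 60 m the 0.5° drift alone shifts the projected point by roughly half a meter (err ≈ 60 m × 0.5° in radians), enough to associate a detection with the wrong lane, which is why online recalibration matters.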
A Multi-Robot Platform for Robotic Triage Combining Onboard Sensing and Foundation Models
Hughes, Jason, Hussing, Marcel, Zhang, Edward, Kannapiran, Shenbagaraj, Caswell, Joshua, Chaney, Kenneth, Deng, Ruichen, Feehery, Michaela, Kratimenos, Agelos, Li, Yi Fan, Major, Britny, Sanchez, Ethan, Shrote, Sumukh, Wang, Youkang, Wang, Jeremy, Zein, Daudi, Zhang, Luying, Zhang, Ruijun, Zhou, Alex, Zhouga, Tenzi, Cannon, Jeremy, Qasim, Zaffir, Yelon, Jay, Cladera, Fernando, Daniilidis, Kostas, Taylor, Camillo J., Eaton, Eric
Abstract-- This report presents a heterogeneous robotic system designed for remote primary triage in mass-casualty incidents (MCIs). The system employs a coordinated air-ground team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) to locate victims, assess their injuries, and prioritize medical assistance without risking the lives of first responders. The UAVs identify and provide overhead views of casualties, while UGVs equipped with specialized sensors measure vital signs and detect and localize physical injuries. Unlike previous work that focused on exploration or limited medical evaluation, this system addresses the complete triage process: victim localization, vital sign measurement, injury severity classification, mental status assessment, and data consolidation for first responders. Developed as part of the DARPA Triage Challenge, this approach demonstrates how multi-robot systems can augment human capabilities in disaster response scenarios to maximize lives saved.
I. INTRODUCTION
Robotics has long sought to augment human capabilities in hazardous scenarios. Mass-casualty incidents (MCIs), such as those resulting from natural disasters, bombings, plane crashes, or industrial chemical spills, present an opportunity for robotic systems to assist first responders. The critical first step of providing medical assistance during MCIs is primary triage: the initial process of locating victims at the site of the MCI and assessing the severity of their injuries to prioritize treatment, which is essential to optimizing survival outcomes. Traditionally, primary triage relies on human responders who may face significant risk and information overload [1], underscoring the potential for automated systems to mitigate these challenges. While prior efforts have explored the use of air-ground robotic teams for search and exploration in disaster zones [2]-[5], few systems have focused specifically on rapid triage.
Existing approaches typically solve parts of the problem in isolation without integrating comprehensive triage functions. For example, air-ground teams have also been developed to find and localize objects of interest [3], [6].
Authors are with the GRASP Lab, School of Engineering and Applied Sciences, University of Pennsylvania. Authors are with the Perelman School of Medicine, University of Pennsylvania. This work was supported by the DARPA Triage Challenge under grant #HR001123S0011.
- North America > United States > Pennsylvania (0.44)
- North America > United States > Texas (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Asia > Japan > Honshū > Kansai > Hyogo Prefecture > Kobe (0.04)
- Transportation > Air (1.00)
- Health & Medicine > Therapeutic Area (0.99)
- Health & Medicine > Diagnostic Medicine > Vital Signs (0.90)
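The triage pipeline the report describes — measure vitals, assess severity, prioritize — can be sketched as a data structure plus an ordering rule. The rule below is a toy loosely inspired by START-style protocols; its thresholds are placeholders for illustration only, not the system's actual algorithm and not clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    victim_id: str
    breathing_rate: float  # breaths per minute (0 = not breathing)
    heart_rate: float      # beats per minute
    responsive: bool       # simplified mental-status check

def triage_category(c: Casualty) -> str:
    """Toy primary-triage rule. Thresholds are illustrative placeholders."""
    if c.breathing_rate == 0:
        return "expectant"
    if c.breathing_rate > 30 or c.heart_rate > 120 or not c.responsive:
        return "immediate"
    return "delayed"

PRIORITY = ["immediate", "delayed", "expectant"]

# Consolidated records from the robot team, sorted for first responders.
queue = sorted(
    [Casualty("v1", 12, 80, True),
     Casualty("v2", 35, 130, False),
     Casualty("v3", 0, 0, False)],
    key=lambda c: PRIORITY.index(triage_category(c)),
)
```

In the actual system the vital-sign fields would be filled by the UGVs' non-contact sensors and the overhead localization by the UAVs; only the consolidation step is sketched here.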
Resource-Efficient Beam Prediction in mmWave Communications with Multimodal Realistic Simulation Framework
Park, Yu Min, Tun, Yan Kyaw, Huh, Eui-Nam, Saad, Walid, Hong, Choong Seon
Beamforming is a key technology in millimeter-wave (mmWave) communications that improves signal transmission by optimizing directionality and intensity. However, conventional channel estimation methods, such as pilot signals or beam sweeping, often fail to adapt to rapidly changing communication environments. To address this limitation, multimodal sensing-aided beam prediction has gained significant attention, using various sensing data from devices such as LiDAR, radar, GPS, and RGB images to predict user locations or network conditions. Despite its promising potential, the adoption of multimodal sensing-aided beam prediction is hindered by high computational complexity, high costs, and limited datasets. Thus, in this paper, a novel resource-efficient learning framework is introduced for beam prediction, which leverages a custom-designed cross-modal relational knowledge distillation (CRKD) algorithm specifically tailored for beam prediction tasks, to transfer knowledge from a multimodal network to a radar-only student model, achieving high accuracy with reduced computational cost. To enable multimodal learning with realistic data, a novel multimodal simulation framework is developed that integrates sensor data generated from the autonomous driving simulator CARLA with MATLAB-based mmWave channel modeling, reflecting real-world conditions. The proposed CRKD achieves its objective by distilling relational information across different feature spaces, which enhances beam prediction performance without relying on expensive sensor data. Simulation results demonstrate that CRKD efficiently distills multimodal knowledge, allowing a radar-only model to achieve $94.62\%$ of the teacher performance. In particular, this is achieved with just $10\%$ of the teacher network's parameters, thereby significantly reducing computational complexity and dependence on multimodal sensor data.
- North America > United States > Virginia (0.04)
- North America > United States > Massachusetts > Middlesex County > Natick (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Education (0.67)
- Information Technology (0.66)
- Transportation > Ground > Road (0.48)
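The relational distillation idea in CRKD — matching the *geometry* of the teacher's and student's feature spaces rather than the features themselves, so the two may have different dimensionality — can be sketched with a distance-wise loss in the style of relational knowledge distillation. This is a generic illustration (with an L2 penalty where a Huber loss is common), not the paper's exact loss; all dimensions and data below are arbitrary.

```python
import numpy as np

def pairwise_dist(feat):
    """Mean-normalized pairwise Euclidean distances within a batch (N, D)."""
    diff = feat[:, None, :] - feat[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d / d[d > 0].mean()

def rkd_distance_loss(teacher_feat, student_feat):
    """Distance-wise relational loss: penalize mismatch between the
    normalized pairwise-distance structures of the two feature spaces."""
    dt = pairwise_dist(teacher_feat)
    ds = pairwise_dist(student_feat)
    return ((dt - ds) ** 2).mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 64))   # e.g. multimodal embedding
student = rng.normal(size=(8, 16))   # e.g. radar-only embedding
loss = rkd_distance_loss(teacher, student)
```

Because only relative distances within a batch are compared, the radar-only student can inherit the multimodal teacher's relational structure without ever seeing LiDAR, GPS, or camera inputs at inference time.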
Novel UWB Synthetic Aperture Radar Imaging for Mobile Robot Mapping
Premachandra, Charith, Tan, U-Xuan
Traditional exteroceptive sensors in mobile robots, such as LiDARs and cameras, often struggle to perceive the environment in poor visibility conditions. Recently, radar technologies such as ultra-wideband (UWB) have emerged as potential alternatives due to their ability to see through adverse environmental conditions (e.g. dust, smoke and rain). However, due to their small apertures and low directivity, UWB radars cannot reconstruct a detailed image of their field of view (FOV) from a single scan. Hence, a virtual large aperture is synthesized by moving the radar along a mobile robot path. The resulting synthetic aperture radar (SAR) image is a high-definition representation of the surrounding environment. This paper therefore proposes a pipeline for mobile robots to incorporate UWB radar-based SAR imaging to map an unknown environment. Finally, we evaluated the performance of classical feature detectors: SIFT, SURF, BRISK, AKAZE and ORB to identify loop closures using UWB SAR images. The experiments were conducted emulating adverse environmental conditions. The results demonstrate the viability and effectiveness of UWB SAR imaging for high-resolution environmental mapping and loop closure detection toward more robust and reliable robotic perception systems.
- Asia > Singapore (0.04)
- North America > United States > Colorado > Adams County (0.04)
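Synthesizing a large aperture along the robot's path, as described above, amounts to coherently combining echoes from many positions. A minimal time-domain back-projection imager over a single simulated point scatterer shows the principle (all radar parameters below are illustrative, not the paper's system):

```python
import numpy as np

C = 3e8  # speed of light, m/s

def backprojection(echoes, radar_xs, grid_x, grid_y, fs, fc):
    """Time-domain back-projection over a linear synthetic aperture.

    echoes: (n_positions, n_samples) complex baseband range profiles.
    Each pixel coherently sums, over all radar positions, the echo
    sample at the round-trip delay, phase-compensated at carrier fc.
    """
    img = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for xr, echo in zip(radar_xs, echoes):
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                delay = 2 * np.hypot(x - xr, y) / C
                idx = int(round(delay * fs))
                if idx < echo.shape[0]:
                    img[iy, ix] += echo[idx] * np.exp(2j * np.pi * fc * delay)
    return np.abs(img)

# Simulate echoes from one point scatterer at (0, 5) m as the radar
# moves along x; each profile holds a single delayed, phase-rotated return.
fs, fc = 1e9, 7.9e9
radar_xs = np.linspace(-1.0, 1.0, 11)
tx, ty = 0.0, 5.0
echoes = np.zeros((11, 100), dtype=complex)
for k, xr in enumerate(radar_xs):
    d = 2 * np.hypot(tx - xr, ty) / C
    echoes[k, int(round(d * fs))] = np.exp(-2j * np.pi * fc * d)

grid_x = np.linspace(-0.6, 0.6, 5)
grid_y = np.linspace(4.4, 5.6, 5)
img = backprojection(echoes, radar_xs, grid_x, grid_y, fs, fc)
```

Only at the scatterer's true position do the phase compensations align across the aperture, so the image peaks sharply there; everywhere else the contributions add incoherently, which is exactly what makes the SAR image high-definition despite the radar's low single-scan directivity.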
Scalable Multisubject Vital Sign Monitoring With mmWave FMCW Radar and FPGA Prototyping
Benny, Jewel, Moudhgalya, Narahari N., Khan, Mujeev, Meena, Hemant Kumar, Wajid, Mohd, Srivastava, Abhishek
Abstract--In this work, we introduce an innovative approach to estimate the vital signs of multiple human subjects simultaneously in a non-contact way using a Frequency Modulated Continuous Wave (FMCW) radar-based system. This work also explores the ambitious goal of extending this capability to an arbitrary number of subjects and details the associated challenges, encompassing both hardware and theoretical limitations. Supported by rigorous experimental results and discussions, the paper paints a vivid picture of the system's potential to redefine vital sign monitoring. An FPGA-based implementation is also presented as proof of concept of an entirely hardware-based and portable solution to vitals monitoring, which improves upon previous works in a multitude of ways, offering 2.7x faster execution and 18.4% lower Look-Up Table (LUT) utilization, and providing over 7400x acceleration compared to its software counterpart. A promising solution to overcome these issues is radar sensing technology for HR and BR measurement, offering non-contact capabilities. This approach also extends to applications including sleep apnea detection [5], fall detection [6] and patient monitoring [7]. Continuous-wave (CW) Doppler Radar systems have significantly advanced this field, addressing various technical challenges in HR and BR measurement [8] [9].
This work was supported by the Chips to Startup (C2S) program, Ministry of Electronics and Information Technology (MeitY), Govt. of India, IHub Mobility, IIIT Hyderabad, Kohli Center on Intelligent Systems (KCIS), IIIT Hyderabad and IHub Anubhuti-IIIT Delhi Foundation.
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States > Texas (0.04)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
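The core of non-contact HR/BR estimation with an FMCW radar, as in the work above, is reading the chest's sub-millimeter motion out of the phase of its range bin and locating the breathing and heartbeat peaks in that phase signal's spectrum. A sketch with a simulated chest signal (sampling rate, carrier, motion amplitudes, and search bands below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def estimate_rates(phase, fs):
    """Pick breathing and heart rates from the phase signal's spectrum.

    Breathing is searched in 0.1-0.5 Hz and heartbeat in 0.8-2.0 Hz
    (illustrative bands); the chest's phase in an FMCW radar is
    proportional to its displacement.
    """
    spec = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(len(phase), 1 / fs)

    def peak(lo, hi):
        band = (freqs >= lo) & (freqs <= hi)
        return freqs[band][np.argmax(spec[band])]

    return peak(0.1, 0.5), peak(0.8, 2.0)

# Simulated chest motion: 0.25 Hz breathing (millimeters) plus a much
# smaller 1.2 Hz heartbeat component, observed at a 77 GHz carrier.
fs = 20.0
t = np.arange(0, 30, 1 / fs)
wavelength = 3e8 / 77e9
disp = 4e-3 * np.sin(2 * np.pi * 0.25 * t) + 3e-4 * np.sin(2 * np.pi * 1.2 * t)
phase = 4 * np.pi * disp / wavelength  # phase of the chest's range bin
br, hr = estimate_rates(phase, fs)
```

Multi-subject operation, the paper's focus, repeats this per occupied range-angle bin; separating subjects that share a bin is one of the hardware/theoretical challenges the abstract alludes to.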