radar
- North America > United States > Texas (0.04)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > Middle East > Israel (0.04)
- Asia > China > Hong Kong > Sha Tin (0.04)
- North America > United States (0.04)
- Europe > Greece > West Greece > Patra (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (2 more...)
Many new UK drone users must take theory test before flying outside
Many in the UK who unwrapped a new drone this Christmas may face a rude awakening next week, when they will have to take a theory test before being allowed to fly outdoors. From 1 January, those intending to fly drones or model aircraft weighing 100g or more outside must complete a Civil Aviation Authority (CAA) online theory test to get a Flyer ID - something previously only needed for heavier drones. The regulator believes up to half a million people in the UK may be affected by its new requirements. CAA spokesperson Jonathan Nicholson said that with drones becoming a common Christmas present, it was important people knew how to comply with the law. With the new drone rules coming into force this week, all drone users must register, get a Flyer ID and follow the regulations, he said.
- North America > United States (0.31)
- North America > Central America (0.15)
- Europe > United Kingdom > Scotland (0.07)
- (14 more...)
- Transportation > Air (1.00)
- Government (1.00)
MVDoppler: Unleashing the Power of Multi-View Doppler for MicroMotion-based Gait Classification
Modern perception systems rely heavily on high-resolution cameras, LiDARs, and advanced deep neural networks, enabling exceptional performance across various applications. However, these optical systems predominantly depend on geometric features and shapes of objects, which can be challenging to capture in long-range perception applications. To overcome this limitation, alternative approaches such as Doppler-based perception using high-resolution radars have been proposed. Doppler-based systems are capable of measuring micro-motions of targets remotely and with very high precision. When compared to geometric features, the resolution of micro-motion features exhibits significantly greater resilience to the influence of distance. However, the true potential of Doppler-based perception has yet to be fully realized due to several factors. These include the unintuitive nature of Doppler signals, the limited availability of public Doppler datasets, and the current datasets' inability to capture the specific co-factors that are unique to Doppler-based perception, such as the effect of the radar's observation angle and the target's motion trajectory. This paper introduces a new large multi-view Doppler dataset together with baseline perception models for micro-motion-based gait analysis and classification.
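To make the range-resilience claim concrete, here is a minimal sketch (hypothetical, not from the MVDoppler paper): the Doppler shift depends only on the target's radial velocity and the radar wavelength, not on its distance, and a micro-Doppler signature is just a short-time Fourier transform over slow time. The 77 GHz carrier is an assumed value.

```python
# A minimal, hypothetical sketch (not from the paper) of why micro-Doppler
# features are range-resilient: the two-way Doppler shift f_d = 2 * v / lambda
# depends only on radial velocity, not on target distance.
import numpy as np

C = 3e8           # speed of light (m/s)
F_CARRIER = 77e9  # assumed automotive radar carrier frequency (Hz)
WAVELENGTH = C / F_CARRIER

def doppler_shift(radial_velocity_mps: float) -> float:
    """Two-way Doppler shift for a target moving at the given radial velocity."""
    return 2.0 * radial_velocity_mps / WAVELENGTH

# A 1.5 m/s gait-like limb swing produces the same shift at 5 m and at 200 m:
print(doppler_shift(1.5))  # 770 Hz, independent of range

def micro_doppler_spectrogram(iq_slow_time: np.ndarray,
                              window: int = 128, hop: int = 32) -> np.ndarray:
    """Micro-Doppler signature: |STFT|^2 frames of a complex slow-time signal."""
    frames = []
    for start in range(0, len(iq_slow_time) - window, hop):
        segment = iq_slow_time[start:start + window] * np.hanning(window)
        frames.append(np.abs(np.fft.fftshift(np.fft.fft(segment))) ** 2)
    return np.array(frames)  # shape: (time_frames, doppler_bins)
```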
RADAR: Robust AI-Text Detection via Adversarial Learning
Recent advances in large language models (LLMs) and the intensifying popularity of ChatGPT-like applications have blurred the boundary of high-quality text generation between humans and machines. However, in addition to the anticipated revolutionary changes to our technology and society, the difficulty of distinguishing LLM-generated texts (AI-text) from human-generated texts poses new challenges of misuse and fairness, such as fake content generation, plagiarism, and false accusations of innocent writers. While existing works show that current AI-text detectors are not robust to LLM-based paraphrasing, this paper aims to bridge this gap by proposing a new framework called RADAR, which jointly trains a $\underline{r}$obust $\underline{A}$I-text $\underline{d}$etector via $\underline{a}$dversarial lea$\underline{r}$ning. RADAR is based on adversarial training of a paraphraser and a detector. The paraphraser's goal is to generate realistic content to evade AI-text detection. RADAR uses the feedback from the detector to update the paraphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly 2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets, experimental results show that RADAR significantly outperforms existing AI-text detection methods, especially when paraphrasing is in place. We also identify the strong transferability of RADAR from instruction-tuned LLMs to other LLMs, and evaluate the improved capability of RADAR via GPT-3.5-Turbo.
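The abstract spells out the training loop, so a schematic sketch may help; the `Paraphraser` and `Detector` classes below are hypothetical stand-ins, not the authors' implementation.

```python
# A schematic sketch (not the authors' code) of the adversarial loop the
# abstract describes: a paraphraser rewrites AI text to evade a detector,
# and the detector is trained on the paraphraser's outputs in turn.
import random

class Paraphraser:
    def rewrite(self, text: str) -> str:
        return text  # placeholder: a real model would generate a paraphrase
    def update(self, reward: float) -> None:
        pass         # placeholder: e.g. a policy-gradient step on the reward

class Detector:
    def prob_ai(self, text: str) -> float:
        return random.random()  # placeholder: P(text is AI-generated)
    def update(self, text: str, label: int) -> None:
        pass                    # placeholder: one gradient step on (text, label)

def adversarial_round(paraphraser, detector, human_texts, ai_texts):
    # 1) Paraphraser tries to evade: a low detector score means a high reward.
    for text in ai_texts:
        paraphrased = paraphraser.rewrite(text)
        reward = 1.0 - detector.prob_ai(paraphrased)
        paraphraser.update(reward)
    # 2) Detector trains on human text (label 0) and evasive AI text (label 1).
    for text in human_texts:
        detector.update(text, label=0)
    for text in ai_texts:
        detector.update(paraphraser.rewrite(text), label=1)
```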
RLCNet: An end-to-end deep learning framework for simultaneous online calibration of LiDAR, RADAR, and Camera
Cholakkal, Hafeez Husain, Arrigoni, Stefano, Braghin, Francesco
Autonomous vehicles are poised to revolutionize transportation by improving road safety, reducing traffic congestion, and increasing mobility convenience [1]. To perceive and interact with their environment accurately, these vehicles rely on a combination of complementary sensors, including LiDAR, RADAR, and cameras. Each sensor offers unique advantages: cameras capture rich visual detail, LiDAR provides precise 3D spatial measurements, and RADAR performs robustly under adverse weather conditions [2]. Sensor fusion leverages the strengths of these modalities to ensure redundancy and resilience, allowing the vehicle to maintain accurate perception in diverse and dynamic environments [3]. A critical component of sensor fusion is extrinsic calibration, which involves the determination of the relative positions and orientations of sensors in a common coordinate frame. However, maintaining precise calibration over time is a persistent challenge. Factors such as mechanical vibrations, temperature changes, and minor collisions can lead to sensor drift, where even small misalignments in sensor orientation or position can result in substantial perception errors, potentially compromising vehicle safety.
- Europe > Netherlands > South Holland > Delft (0.04)
- Europe > Italy > Lombardy > Milan (0.04)
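Since the passage hinges on extrinsic calibration and on how small misalignments become large perception errors, here is a minimal numeric sketch (illustrative only, not RLCNet): a rigid transform maps LiDAR points into the camera frame, and a half-degree yaw drift displaces a point 100 m ahead by almost a meter.

```python
# A minimal illustrative sketch (not RLCNet itself) of extrinsic calibration:
# a rigid transform (rotation R, translation t) maps points from the LiDAR
# frame into the camera frame, and a small rotational misalignment grows into
# a large position error at range.
import numpy as np

def transform_points(points_lidar: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the extrinsic transform p_cam = R @ p_lidar + t to Nx3 points."""
    return points_lidar @ R.T + t

def rotation_z(angle_rad: float) -> np.ndarray:
    """Rotation about the z-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A 0.5-degree yaw drift displaces a point 100 m ahead by roughly 0.87 m:
point = np.array([[100.0, 0.0, 0.0]])
drifted = transform_points(point, rotation_z(np.deg2rad(0.5)), np.zeros(3))
print(np.linalg.norm(drifted - point))  # ~0.87 m of lateral error
```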
A Multi-Robot Platform for Robotic Triage Combining Onboard Sensing and Foundation Models
Hughes, Jason, Hussing, Marcel, Zhang, Edward, Kannapiran, Shenbagaraj, Caswell, Joshua, Chaney, Kenneth, Deng, Ruichen, Feehery, Michaela, Kratimenos, Agelos, Li, Yi Fan, Major, Britny, Sanchez, Ethan, Shrote, Sumukh, Wang, Youkang, Wang, Jeremy, Zein, Daudi, Zhang, Luying, Zhang, Ruijun, Zhou, Alex, Zhouga, Tenzi, Cannon, Jeremy, Qasim, Zaffir, Yelon, Jay, Cladera, Fernando, Daniilidis, Kostas, Taylor, Camillo J., Eaton, Eric
Abstract-- This report presents a heterogeneous robotic system designed for remote primary triage in mass-casualty incidents (MCIs). The system employs a coordinated air-ground team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) to locate victims, assess their injuries, and prioritize medical assistance without risking the lives of first responders. The UAVs identify and provide overhead views of casualties, while UGVs equipped with specialized sensors measure vital signs and detect and localize physical injuries. Unlike previous work that focused on exploration or limited medical evaluation, this system addresses the complete triage process: victim localization, vital sign measurement, injury severity classification, mental status assessment, and data consolidation for first responders. Developed as part of the DARPA Triage Challenge, this approach demonstrates how multi-robot systems can augment human capabilities in disaster response scenarios to maximize lives saved.
I. INTRODUCTION
Robotics has long sought to augment human capabilities in hazardous scenarios. Mass-casualty incidents (MCIs), such as those resulting from natural disasters, bombings, plane crashes, or industrial chemical spills, present an opportunity for robotic systems to assist first responders. The critical first step of providing medical assistance during MCIs is primary triage: the initial process of locating victims at the site of the MCI and assessing the severity of their injuries to prioritize treatment, which is essential to optimizing survival outcomes. Traditionally, primary triage relies on human responders who may face significant risk and information overload [1], underscoring the potential for automated systems to mitigate these challenges. While prior efforts have explored the use of air-ground robotic teams for search and exploration in disaster zones [2]-[5], few systems have focused specifically on rapid triage. Existing approaches typically solve parts of the problem in isolation without integrating comprehensive triage functions. For example, air-ground teams have also been developed to find and localize objects of interest [3], [6].
Authors are with the GRASP Lab, School of Engineering and Applied Sciences, University of Pennsylvania. Authors are with the Perelman School of Medicine, University of Pennsylvania. This work was supported by the DARPA Triage Challenge under grant #HR001123S0011.
- North America > United States > Pennsylvania (0.44)
- North America > United States > Texas (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Asia > Japan > Honshū > Kansai > Hyogo Prefecture > Kobe (0.04)
- Transportation > Air (1.00)
- Health & Medicine > Therapeutic Area (0.99)
- Health & Medicine > Diagnostic Medicine > Vital Signs (0.90)
- (2 more...)
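The report's triage pipeline ends in data consolidation for first responders; the sketch below is a hypothetical record layout (field names and the toy `priority` rule are assumptions, not the report's schema or triage algorithm).

```python
# A hypothetical sketch (not the report's schema) of consolidating the triage
# steps the report names -- victim location, vital signs, injury severity, and
# mental status -- into one record handed to first responders.
from dataclasses import dataclass, field

@dataclass
class VitalSigns:
    heart_rate_bpm: float
    respiratory_rate_bpm: float

@dataclass
class TriageRecord:
    victim_id: int
    location_xyz: tuple  # from UAV overhead localization
    vitals: VitalSigns   # from UGV onboard sensors
    injuries: list = field(default_factory=list)  # detected/localized injuries
    mental_status: str = "unknown"  # e.g. "alert", "verbal", "unresponsive"

    def priority(self) -> int:
        """Toy prioritization (an assumption, not the report's algorithm):
        lower number = more urgent."""
        if self.mental_status == "unresponsive" or self.vitals.respiratory_rate_bpm > 30:
            return 0  # immediate
        if self.injuries:
            return 1  # delayed
        return 2      # minor

record = TriageRecord(1, (12.0, 3.5, 0.0), VitalSigns(110.0, 32.0), ["left-arm laceration"])
print(record.priority())  # 0: immediate, given the elevated respiratory rate
```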
Resource-Efficient Beam Prediction in mmWave Communications with Multimodal Realistic Simulation Framework
Park, Yu Min, Tun, Yan Kyaw, Huh, Eui-Nam, Saad, Walid, Hong, Choong Seon
Beamforming is a key technology in millimeter-wave (mmWave) communications that improves signal transmission by optimizing directionality and intensity. However, conventional channel estimation methods, such as pilot signals or beam sweeping, often fail to adapt to rapidly changing communication environments. To address this limitation, multimodal sensing-aided beam prediction has gained significant attention, using sensing data from devices such as LiDAR, radar, GPS, and RGB images to predict user locations or network conditions. Despite its promising potential, the adoption of multimodal sensing-aided beam prediction is hindered by high computational complexity, high costs, and limited datasets. Thus, in this paper, a novel resource-efficient learning framework is introduced for beam prediction, which leverages a custom-designed cross-modal relational knowledge distillation (CRKD) algorithm, tailored specifically to beam prediction, to transfer knowledge from a multimodal network to a radar-only student model, achieving high accuracy at reduced computational cost. To enable multimodal learning with realistic data, a novel multimodal simulation framework is developed that integrates sensor data generated by the autonomous driving simulator CARLA with MATLAB-based mmWave channel modeling to reflect real-world conditions. The proposed CRKD achieves its objective by distilling relational information across different feature spaces, which enhances beam prediction performance without relying on expensive sensor data. Simulation results demonstrate that CRKD efficiently distills multimodal knowledge, allowing a radar-only model to achieve 94.62% of the teacher's performance with just 10% of the teacher network's parameters, thereby significantly reducing computational complexity and dependence on multimodal sensor data.
- North America > United States > Virginia (0.04)
- North America > United States > Massachusetts > Middlesex County > Natick (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (3 more...)
- Education (0.67)
- Information Technology (0.66)
- Transportation > Ground > Road (0.48)
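Since the abstract names the mechanism (distilling relational information across different feature spaces), the following is a minimal sketch of one standard form of relational distillation, matching pairwise embedding distances; it is an assumed stand-in, not the authors' CRKD implementation.

```python
# A minimal sketch (not the authors' CRKD implementation) of relational
# knowledge distillation: the radar-only student is trained so that pairwise
# distances among its embeddings match those of the multimodal teacher, even
# though the two feature spaces differ.
import torch
import torch.nn.functional as F

def pairwise_distances(embeddings: torch.Tensor) -> torch.Tensor:
    """Normalized pairwise Euclidean distances within a batch (B x D -> B x B)."""
    d = torch.cdist(embeddings, embeddings, p=2)
    return d / (d.mean() + 1e-8)  # scale-invariant, so feature spaces can differ

def relational_kd_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    """Distill the teacher's relational structure into the student."""
    return F.smooth_l1_loss(pairwise_distances(student_emb),
                            pairwise_distances(teacher_emb).detach())

# Usage with hypothetical models: the teacher sees all modalities, the student
# only radar; the distillation term is added to the task loss, e.g.
# loss = ce_loss(student_logits, beam_labels) + lam * relational_kd_loss(s_emb, t_emb)
```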