LLM-Enabled In-Context Learning for Data Collection Scheduling in UAV-assisted Sensor Networks
Emami, Yousef, Zhou, Hao, Nabavirazani, SeyedSina, Almeida, Luis
Unmanned Aerial Vehicles (UAVs) are increasingly utilized in private and commercial applications, e.g., traffic control, parcel delivery, and Search and Rescue (SAR) missions. Machine Learning (ML) methods used in UAV-Assisted Sensor Networks (UASNETs), especially Deep Reinforcement Learning (DRL), face challenges such as complex and lengthy model training, gaps between simulation and reality, and low sampling efficiency, which conflict with the urgency of emergencies such as SAR missions. In this paper, an In-Context Learning Data Collection Scheduling (ICLDC) scheme is proposed as an alternative to DRL in emergencies. The UAV collects sensory data and transmits it to a Large Language Model (LLM), which creates a task description in natural language; from this description, the UAV receives a data collection schedule to execute. A verifier ensures safe UAV operations by evaluating the schedules generated by the LLM and overriding unsafe ones based on predefined rules. The system continuously adapts by incorporating feedback into the task descriptions and using it for future decisions. The method is also tested against jailbreaking attacks, in which the task description is manipulated to undermine network performance, highlighting the vulnerability of LLMs to such attacks. The proposed ICLDC significantly reduces cumulative packet loss compared to both Deep Q-Network (DQN) and Maximum Channel Gain baselines, presenting a promising direction for intelligent scheduling and control in UASNETs.
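The decide-verify-feedback loop described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the names `icldc_step`, `verify`, the buffer threshold, and the prompt wording are all assumptions.

```python
# Hypothetical sketch of the ICLDC loop: an LLM proposes which sensor to
# serve, a rule-based verifier overrides unsafe proposals, and the chosen
# decision is fed back into future task descriptions.

def verify(schedule, buffers, max_buffer=10):
    """Rule-based verifier: if any buffer is about to overflow, force
    service of the fullest sensor, overriding the LLM's proposal."""
    fullest = max(buffers, key=buffers.get)
    if buffers[fullest] >= max_buffer - 1 and schedule != fullest:
        return fullest  # override an unsafe schedule
    return schedule

def icldc_step(buffers, history, query_llm):
    """One scheduling decision: prompt -> LLM proposal -> verified choice."""
    task = (f"Buffers: {buffers}. Past decisions: {history[-5:]}. "
            "Pick one sensor to serve to minimise packet loss.")
    proposed = query_llm(task)           # LLM returns a sensor id
    chosen = verify(proposed, buffers)   # safety override if needed
    history.append(chosen)               # feedback for future prompts
    return chosen
```

With a sensor buffer one packet from overflowing, the verifier overrides any LLM proposal that ignores it, which is the safety behaviour the abstract attributes to the verifier.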
FRSICL: LLM-Enabled In-Context Learning Flight Resource Allocation for Fresh Data Collection in UAV-Assisted Wildfire Monitoring
Emami, Yousef, Zhou, Hao, Gaitan, Miguel Gutierrez, Li, Kai, Almeida, Luis
Unmanned Aerial Vehicles (UAVs) are vital for public safety, particularly in wildfire monitoring, where early detection minimizes environmental impact. In UAV-Assisted Wildfire Monitoring (UAWM) systems, joint optimization of sensor transmission scheduling and velocity is critical for minimizing the Age of Information (AoI) of stale sensor data. Deep Reinforcement Learning (DRL) has been used for such optimization; however, limitations such as low sampling efficiency, simulation-to-reality gaps, and complex training render it unsuitable for time-critical applications like wildfire monitoring. This paper introduces a new online Flight Resource Allocation scheme based on LLM-Enabled In-Context Learning (FRSICL) to jointly optimize the UAV's flight control and data collection schedule along the trajectory in real time, thereby asymptotically minimizing the average AoI across ground sensors. In contrast to DRL, FRSICL generates data collection schedules and controls velocity using natural language task descriptions and feedback from the environment, enabling dynamic decision-making without extensive retraining. Simulation results confirm the effectiveness of the proposed FRSICL compared to Proximal Policy Optimization (PPO) and Nearest-Neighbor baselines. Nowadays, UAVs have a wide range of applications in public safety [1], energy [2], and environmental monitoring [3]. Public safety UAVs serve critical roles in emergency operations, including search and rescue (SAR), wildfire surveillance, and disaster management.
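The AoI objective above follows the standard bookkeeping: every sensor's age grows linearly each slot and resets when that sensor is served. A minimal sketch, with the two-sensor scenario and unit delivery delay as illustrative assumptions:

```python
# Toy Age-of-Information (AoI) bookkeeping: ages grow by 1 per slot and
# the served sensor's age resets to the delivery delay of its fresh sample.

def step_aoi(aoi, served, transfer_delay=1):
    """Advance one slot of the AoI process for all sensors."""
    return {s: (transfer_delay if s == served else age + 1)
            for s, age in aoi.items()}

aoi = {"s1": 0, "s2": 0}
for served in ["s1", "s2", "s1"]:   # a fixed schedule, for illustration
    aoi = step_aoi(aoi, served)
# after the loop: s1 was just served (age 1), s2 last served 2 slots ago
```

A scheduler like FRSICL would choose `served` each slot (here via the LLM) so that the long-run average of these ages is minimized.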
Time Invariant Sensor Tasking for Catalog Maintenance of LEO Space objects using Stochastic Geometry
Chowdhury, Partha, M, Harsha, Georg, Chinni Prabhunath, Buduru, Arun Balaji, Biswas, Sanat K
Catalog maintenance of space objects by a limited number of ground-based sensors presents a formidable challenge to the space community. This article presents a methodology for time-invariant tracking and surveillance of space objects in low Earth orbit (LEO) by optimally directing ground sensors. Our methodology aims to maximize the expected number of space objects observed from a set of ground stations by utilizing concepts from stochastic geometry, particularly the Poisson point process. We provide a systematic framework for understanding visibility patterns and enhancing the efficiency of tracking multiple objects simultaneously. Our approach contributes to more informed decision-making in space operations, ultimately supporting efforts to maintain safety and sustainability in LEO.
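The core stochastic-geometry fact behind maximizing the expected number of observed objects is that, for a homogeneous Poisson point process of intensity lam on an orbital shell, the expected count inside a visibility region is simply lam times the region's area. A minimal sketch, with the spherical-cap visibility region, shell radius, and intensity all illustrative assumptions rather than the paper's model:

```python
import math

# Expected number of LEO objects in a ground station's visibility region,
# modelling objects as a homogeneous Poisson point process (PPP) of
# intensity lam (objects per km^2) on a spherical shell.

def cap_area(shell_radius_km, half_angle_rad):
    """Surface area of a spherical cap with the given half-angle."""
    return 2 * math.pi * shell_radius_km**2 * (1 - math.cos(half_angle_rad))

def expected_visible(lam, shell_radius_km, half_angle_rad):
    """E[N] = lam * |visibility region| for a homogeneous PPP."""
    return lam * cap_area(shell_radius_km, half_angle_rad)
```

Directing a sensor then amounts to choosing the pointing (and hence the cap) that maximizes this expectation, which is what makes the tasking problem tractable without propagating every object individually.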
From Prompts to Protection: Large Language Model-Enabled In-Context Learning for Smart Public Safety UAV
Emami, Yousef, Zhou, Hao, Gaitan, Miguel Gutierrez, Li, Kai, Almeida, Luis, Han, Zhu
A public safety Unmanned Aerial Vehicle (UAV) enhances situational awareness in emergency response. Its agility and ability to optimize mobility and establish Line-of-Sight (LoS) communication make it increasingly vital for managing emergencies such as disaster response, search and rescue, and wildfire monitoring. While Deep Reinforcement Learning (DRL) has been applied to optimize UAV navigation and control, its high training complexity, low sample efficiency, and simulation-to-reality gap limit its practicality in public safety. Recent advances in Large Language Models (LLMs) offer a compelling alternative. With strong reasoning and generalization capabilities, LLMs can adapt to new tasks through In-Context Learning (ICL), using natural language prompts and example-based guidance without retraining. Deploying LLMs at the network edge, rather than in the cloud, further reduces latency and preserves data privacy, making them suitable for real-time, mission-critical public safety UAVs. This paper proposes the integration of LLM-enabled ICL with public safety UAVs to address key functions, such as path planning and velocity control, in the context of emergency response. We present a case study on data collection scheduling in which the LLM-enabled ICL framework significantly reduces packet loss compared to conventional approaches, while also mitigating potential jailbreaking vulnerabilities. Finally, we discuss LLM optimizers and specify future research directions. The ICL framework enables adaptive, context-aware decision-making for public safety UAVs, thus offering a lightweight and efficient solution for enhancing UAV autonomy and responsiveness in emergencies.
Deep Reinforcement Learning for Joint Cruise Control and Intelligent Data Acquisition in UAVs-Assisted Sensor Networks
Unmanned aerial vehicle (UAV)-assisted sensor networks (UASNets), which play a crucial role in creating new opportunities, are experiencing significant growth in civil applications worldwide. UASNets improve disaster management through timely surveillance and advance precision agriculture with detailed crop monitoring, offering greater efficiency, safety, and cost-effectiveness that are transforming the commercial sector. A fundamental aspect of these new capabilities is the collection of data from rugged and remote areas. Owing to their excellent mobility and maneuverability, UAVs are employed to collect data from ground sensors in harsh environments, such as natural disaster monitoring, border surveillance, and emergency response. One major challenge in these scenarios is that the movements of UAVs affect channel conditions and result in packet loss: fast movements lead to poor channel conditions and rapid signal degradation, whereas slow mobility can cause buffer overflows at the ground sensors, as newly arrived data is not promptly collected by the UAV. Our proposal to address this challenge is to minimize packet loss by jointly optimizing the velocity controls and data collection schedules of multiple UAVs. Furthermore, in UASNets, swift UAV movements result in poor channel conditions and fast signal attenuation, leading to an extended age of information (AoI), while slow movements prolong flight time, thereby extending the AoI of ground sensors. To address this challenge, we propose a new mean-field flight resource allocation optimization to minimize the AoI of sensory data.
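The velocity trade-off described above (fast flight degrades the channel, slow flight overflows buffers) implies an interior optimum. A toy sketch of that trade-off, where the loss model, constants, and the 100 km tour length are all illustrative assumptions rather than the paper's formulation:

```python
# Toy model of the UAV velocity trade-off: channel-induced loss grows
# with speed, while buffer-overflow loss grows as the tour takes longer.

def expected_loss(velocity, arrival_rate=2.0, buffer_cap=20.0,
                  k_channel=0.02):
    """Combined packet loss for a given UAV velocity (arbitrary units)."""
    channel_loss = k_channel * velocity            # fast flight -> bad channel
    slots_to_cover = 100.0 / max(velocity, 1e-6)   # time to revisit a sensor
    overflow = max(0.0, arrival_rate * slots_to_cover - buffer_cap)
    return channel_loss + overflow

# Sweep candidate velocities to find the loss-minimizing one.
best_v = min(range(1, 31), key=expected_loss)
```

In this toy model the optimum sits exactly where the buffer-overflow term vanishes; the paper's joint optimization generalizes this to multiple UAVs and per-sensor schedules.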
Flying eyes
A fleet of unmanned aerial vehicles will co-operate with a ground robot on surveillance tasks in the Australian Outback, in trials to be held next year by BAE Systems. The series of trials is being organised by researchers at the company's Advanced Technology Centre (ATC) to demonstrate its autonomous systems, data fusion, and artificial intelligence technologies. The first trial, to be held next month with funding from the MoD, will see the team fly the fleet of small UAV research vehicles, fitted with a number of sensors, in a data-gathering exercise. This information will be used to develop algorithms that allow the UAVs to co-operate with each other and act in response to information gathered by aerial and ground sensors, in a trial to be held later in the year. That trial will in turn be followed by another demonstration towards the end of 2005, in which the UAVs will communicate and co-operate with a robot on the ground, said Dr Phil Greenway, head of advanced information processing at the ATC at Filton near Bristol.