DARPA CODE Autonomy Engine Demonstrated on Avenger UAS


General Atomics Aeronautical Systems, Inc. (GA-ASI) has demonstrated the DARPA-developed Collaborative Operations in Denied Environment (CODE) autonomy engine on the company's Avenger Unmanned Aircraft System (UAS). CODE was used to gain further understanding of cognitive Artificial Intelligence (AI) processing on larger UAS platforms for air-to-air targeting. Using a network-enabled Tactical Targeting Network Technology (TTNT) radio for mesh-network mission communications, GA-ASI demonstrated integration of emerging Advanced Tactical Data Links (ATDL), as well as separation between flight-critical and mission-critical systems. During the autonomous flight, the CODE software controlled the manoeuvring of the Avenger UAS for over two hours without human pilot input. GA-ASI extended the base software's behavioural functions for a coordinated air-to-air search with up to six aircraft, five of them virtual for the purposes of the demonstration.

Deep Learning and Reinforcement Learning for Autonomous Unmanned Aerial Systems: Roadmap for Theory to Deployment Machine Learning

Unmanned Aerial Systems (UAS) are being increasingly deployed for commercial, civilian, and military applications. The current state of the art still depends on a remote human controller with robust wireless links to perform several of these applications. This lack of autonomy restricts the domains and tasks for which a UAS can be deployed. Adding autonomy and intelligence to UAS will help overcome this hurdle and expand their use while improving safety and efficiency. The exponential increase in computing resources and the availability of large amounts of data in this digital era have led to the resurgence of machine learning from its last winter. In this chapter, we therefore discuss how some of the advances in machine learning, specifically deep learning and reinforcement learning, can be leveraged to develop next-generation autonomous UAS. We begin by motivating the chapter with a discussion of the applications, challenges, and opportunities of current UAS in the introductory section. We then provide an overview of the key deep learning and reinforcement learning techniques used throughout the chapter. A key area that is essential for UAS autonomy is computer vision; accordingly, we discuss how deep learning approaches have been used to accomplish some of the basic tasks that contribute to UAS autonomy. We then discuss how reinforcement learning has been explored to use this information for autonomous UAS control and navigation. Next, we point the reader to appropriate simulation suites and hardware platforms that help to rapidly prototype novel machine-learning-based solutions for UAS. Finally, we discuss the open problems and challenges pertaining to each aspect of developing autonomous UAS solutions, to shed light on potential research areas.
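To make the reinforcement-learning-for-navigation idea concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy 2D grid navigation task. The grid size, reward shaping, and hyperparameters are illustrative assumptions, not details from the chapter, which covers far richer (deep) methods.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                          # N x N grid, goal at bottom-right
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((N, N, len(ACTIONS)))             # tabular action-value function
alpha, gamma, eps = 0.5, 0.95, 0.1             # learning rate, discount, exploration

def step(state, a):
    """Apply an action, clipping to the grid; +1 at the goal, small step cost."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr, nc = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    done = (nr, nc) == (N - 1, N - 1)
    return (nr, nc), (1.0 if done else -0.01), done

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # standard Q-learning temporal-difference update
        Q[s][a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s][a])
        s = s2

# Greedy rollout from the start; the shortest path on this grid is 8 steps.
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    s, _, done = step(s, int(np.argmax(Q[s])))
    path.append(s)
    if done:
        break
print(path[-1], len(path) - 1)
```

Replacing the table `Q` with a neural network approximator (and the grid with a simulator) is, in essence, the jump from this sketch to the deep reinforcement learning approaches the chapter surveys.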

Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems Artificial Intelligence

Recent successes have combined reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. The resulting theoretical foundation is called the Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample efficient than traditional supervised learning alone. Finally, the Cycle-of-Learning provides an effective transition between policies learned from human demonstrations and interventions and reinforcement learning. The theoretical foundation developed by this research opens new research paths in human-agent teaming, where autonomous agents learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
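One ingredient of this approach, warm-starting a policy from pooled human demonstrations and interventions before any reinforcement learning, can be sketched very simply. Everything below is invented for illustration (a 1-D task, a linear policy, synthetic "human" data); the actual Cycle-of-Learning method is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D control task: the human's action is a = 2x for observation x.
demos = rng.uniform(-1, 1, size=(40, 1))          # states from full demonstrations
interventions = rng.uniform(-1, 1, size=(10, 1))  # states where the human took over
X = np.vstack([demos, interventions])             # pool both interaction modalities
y = 2.0 * X[:, 0] + rng.normal(0, 0.05, size=len(X))  # noisy human actions

# Behaviour cloning = supervised fit of a linear policy a = w*x + b,
# solved here in closed form with least squares.
A = np.hstack([X, np.ones((len(X), 1))])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(w, b)  # should recover roughly w = 2, b = 0
```

The cloned policy would then serve as the starting point for reinforcement learning fine-tuning, rather than learning from scratch, which is where the sample-efficiency gains reported above come from.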

A Game Theoretical Framework for the Evaluation of Unmanned Aircraft Systems Airspace Integration Concepts Machine Learning

Predicting the outcomes of integrating Unmanned Aerial Systems (UAS) into the National Airspace System (NAS) is a complex problem that must be addressed through simulation studies before UAS are allowed routine access to the NAS. This thesis provides 2D and 3D simulation frameworks that use a game-theoretical methodology to evaluate integration concepts in scenarios where manned and unmanned air vehicles co-exist. The fundamental gap in the literature is that models of interaction between manned and unmanned vehicles are insufficient: a) they assume that pilot behavior is known a priori, and b) they disregard decision-making processes. The contribution of this work is a modeling framework in which human pilot reactions are modeled using reinforcement learning and a game-theoretical concept called level-k reasoning. The level-k concept is based on the assumption that humans reason at various levels of decision making. Reinforcement learning is a mathematical learning method rooted in human learning. In this work, a classical and an approximate reinforcement learning method (Neural Fitted Q Iteration) are used to model time-extended pilot decisions with 2D and 3D maneuvers. An analysis of UAS integration is conducted using example scenarios in the presence of manned aircraft and fully autonomous UAS equipped with sense-and-avoid algorithms.
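The level-k idea itself is easy to illustrate with the classic "p-beauty contest" (guess p times the average guess): level-0 players use a naive anchor, and each level-k player best-responds to a population assumed to be level-(k-1). This textbook game is only an analogy; in the thesis, the level-k pilot policies are instead trained with reinforcement learning on air-traffic scenarios.

```python
def level_k_guess(k, p=2/3, level0=50.0):
    """Level-0 guesses a naive anchor (50); a level-k player best-responds
    to a population it believes is entirely level-(k-1)."""
    guess = level0
    for _ in range(k):
        guess = p * guess  # best response when everyone else guesses `guess`
    return guess

for k in range(4):
    print(k, round(level_k_guess(k), 2))  # 50.0, 33.33, 22.22, 14.81
```

Note how the hierarchy bottoms out at a non-strategic level-0 rule; the modeling choice for level-0 (here, the anchor 50) is what distinguishes different level-k models in practice.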

Design Challenges of Multi-UAV Systems in Cyber-Physical Applications: A Comprehensive Survey, and Future Directions Artificial Intelligence

Unmanned Aerial Vehicles (UAVs) have recently grown rapidly to facilitate a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems with synergic cooperation between computational and physical potentials that can interact with humans through several new mechanisms. The main advantages of using UAVs in CPS applications are their exceptional features, including mobility, dynamism, effortless deployment, adaptive altitude, agility, adjustability, and effective appraisal of real-world functions anytime and anywhere. Furthermore, from the technology perspective, UAVs are predicted to be a vital element of the development of advanced CPSs. Therefore, in this survey, we aim to pinpoint the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects that span the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also investigated to show how these practical technologies can facilitate CPS applications. We present and propose state-of-the-art algorithms to address design challenges with both quantitative and qualitative methods, and map these challenges to important CPS applications to draw insightful conclusions about the challenges of each application. Finally, we summarize potential new directions and ideas that could shape future research in these areas.

How Drones Will Impact Society: From Fighting War to Forecasting Weather, UAVs Change Everything


UAVs are tackling everything from disease control to vacuuming up ocean waste to delivering pizza, and more. Drone technology has been used by defense organizations and tech-savvy consumers for quite some time. However, the benefits of this technology extend well beyond just these sectors. With the rising accessibility of drones, many of the most dangerous and high-paying jobs within the commercial sector are ripe for displacement by drone technology. The use cases for safe, cost-effective solutions range from data collection to delivery. And as autonomy and collision-avoidance technologies improve, so too will drones' ability to perform increasingly complex tasks. According to forecasts, the emerging global market for business services using drones is valued at over $127B. As more companies look to capitalize on these commercial opportunities, investment in the drone space continues to grow. A drone or a UAV (unmanned aerial vehicle) typically refers to a pilotless aircraft that operates through a combination of technologies, including computer vision, artificial intelligence, and object-avoidance systems. But drones can also be ground or sea vehicles that operate autonomously.

From Energy To Telecom: 30 Big Industries Drones Could Disrupt


Energy, insurance, telecommunications, and many other industries could also have drones in their future.

Military drones set to replace police helicopters by 2025

Daily Mail - Science & tech

Military drones that can fly for more than 40 hours and stream footage of US cities will replace police helicopters by 2025, experts claim. Multiple defence companies are now racing to build unmanned aircraft that will be allowed to fly in US airspace, which is incredibly tightly controlled. Leading the race is a long-winged craft called the MQ-9B, created by California-based company General Atomics. It could allow law enforcement to stream video of cities from 2,000 feet (610 metres) using cameras that are powerful enough to pick out individual faces in a crowd. General Atomics is investing heavily in the MQ-9B and is aiming to receive FAA certification to fly by 2025.

Unmanned Flight: The Drones Come Home - Pictures, More From National Geographic Magazine

AITopics Original Links

It's not a vulture or crow but a Falcon--a new brand of unmanned aerial vehicle, or drone, and Johnson is flying it. The sheriff's office here in Mesa County, a plateau of farms and ranches corralled by bone-hued mountains, is weighing the Falcon's potential for spotting lost hikers and criminals on the lam. A laptop on a table in front of Johnson shows the drone's flickering images of a nearby highway. Standing behind Johnson, watching him watch the Falcon, is its designer, Chris Miser. Rock-jawed, arms crossed, sunglasses pushed atop his shaved head, Miser is a former Air Force captain who worked on military drones before quitting in 2007 to found his own company in Aurora, Colorado. The Falcon has an eight-foot wingspan but weighs just 9.5 pounds.