The Victorian government has launched an aviation unit within Fire Rescue Victoria (FRV) that will be responsible for using drone technology to assist firefighters and other emergency services on the ground. The new unit will be staffed by four specialist firefighters, including qualified Civil Aviation Safety Authority drone pilots and aviation-accredited personnel. As part of standing up the new FRV unit, four new drones will be made available to Victorian firefighters, which they can deploy to gather aerial images of fires and other emergencies. According to the Victorian government, the new drones feature high-definition thermal imaging and live-streaming cameras, can fly for up to 30 minutes, and can withstand strong winds, helping crews better monitor fires and other incidents from the air. These new drones will be in addition to the existing drone services used by FRV.
General Atomics Aeronautical Systems, Inc. (GA-ASI) has demonstrated the DARPA-developed Collaborative Operations in Denied Environment (CODE) autonomy engine on the company's Avenger Unmanned Aircraft System (UAS). CODE was used to gain further understanding of cognitive Artificial Intelligence (AI) processing on larger UAS platforms for air-to-air targeting. Using a network-enabled Tactical Targeting Network Technology (TTNT) radio for mesh-network mission communications, GA-ASI demonstrated integration of emerging Advanced Tactical Data Links (ATDL), as well as separation between flight-critical and mission-critical systems. During the autonomous flight, CODE software controlled the manoeuvring of the Avenger UAS for over two hours without human pilot input. GA-ASI extended the base software's behavioural functions for a coordinated air-to-air search with up to six aircraft, using five virtual aircraft for the purposes of the demonstration.
Verdict lists the top five terms tweeted on robotics in November 2020, based on data from GlobalData's Influencer Platform. The top tweeted terms are the trending industry discussions happening on Twitter by key individuals (influencers) as tracked by the platform. The role of artificial intelligence (AI) in solving human problems, its application in chemical research, and how it is driving new business models and productivity potential were popularly discussed in November. According to an article shared by Spiros Margaris, a venture capitalist, AI can help solve the world's most challenging problems, from creating diagnostic equipment to building unmanned aerial vehicles. The article noted that although some fear that AI and robotics will usurp all human jobs, AI is the basis for technological innovations such as driverless cars, smart personal agents, and autonomous drones, among others.
General Atomics Aeronautical Systems, Inc. (GA-ASI) has been awarded a contract by the U.S. Department of Defense's Joint Artificial Intelligence Center (JAIC) to develop enhanced autonomous sensing capabilities for unmanned aerial vehicles (UAVs). The JAIC Smart Sensor project aims to advance drone-based AI technology by demonstrating object recognition algorithms and employing onboard AI to automatically control UAV sensors and direct autonomous flight. GA-ASI will deploy these new capabilities on an MQ-9 Reaper UAV equipped with a variety of sensors, including GA-ASI's Reaper Defense Electronic Support System (RDESS) and Lynx Synthetic Aperture Radar (SAR). GA-ASI's Metis Intelligence, Surveillance and Reconnaissance (ISR) tasking and intelligence-sharing application, which enables operators to specify effects-based mission objectives and receive automatic notification of actionable intelligence, will be used to command the unmanned aircraft. J.R. Reid, GA-ASI Vice President of Strategic Development, commented: "GA-ASI is excited to leverage the considerable investment we have made to advance the JAIC's autonomous sensing objective. This will bring a tremendous increase in unmanned systems capabilities for applications across the full range of military operations."
Unmanned aerial vehicles (UAVs) can be deployed to monitor very large areas without the need for network infrastructure. UAVs communicate and exchange information with each other during flight, but such communication poses security challenges because of the network's dynamic topology. To address these challenges, the proposed SAUAV method counters malicious UAV attacks in two phases. In the first phase, a set of rules and principles is applied to identify and remove malicious UAVs based on their behavior in the network, preventing them from sending fake information to the investigating UAVs. In the second phase, the mobile agent of each UAV uses a three-step negotiation process with reliable neighbors to eliminate malicious UAVs: the agent informs normal neighboring UAVs so that they stop listening to traffic generated by the malicious ones. The NS-3 simulator was used to demonstrate the efficiency of the SAUAV method. The proposed method outperforms CST-UAS, CS-AVN, HVCR, and BSUM-based methods in detection rate, false positive rate, false negative rate, packet delivery rate, and residual energy.
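The two-phase scheme can be illustrated with a toy sketch. The class names, behavioral rules, thresholds, and majority vote below are illustrative assumptions for exposition, not details taken from the SAUAV paper.

```python
from dataclasses import dataclass

@dataclass
class NeighborStats:
    """Per-neighbor behavioral counters accumulated during flight."""
    packets_forwarded: int = 0
    packets_dropped: int = 0
    position_inconsistencies: int = 0

class MaliciousUAVFilter:
    """Phase one (illustrative): rule-based detection from observed behavior.
    A neighbor is flagged if it drops too many packets or repeatedly
    reports positions inconsistent with its observed movement."""

    def __init__(self, drop_threshold=0.5, inconsistency_threshold=3):
        self.drop_threshold = drop_threshold          # assumed threshold
        self.inconsistency_threshold = inconsistency_threshold
        self.stats = {}

    def observe(self, uav_id, forwarded, dropped, inconsistent=False):
        s = self.stats.setdefault(uav_id, NeighborStats())
        s.packets_forwarded += forwarded
        s.packets_dropped += dropped
        if inconsistent:
            s.position_inconsistencies += 1

    def is_malicious(self, uav_id):
        s = self.stats.get(uav_id)
        if s is None:
            return False
        total = s.packets_forwarded + s.packets_dropped
        drop_rate = s.packets_dropped / total if total else 0.0
        return (drop_rate > self.drop_threshold
                or s.position_inconsistencies >= self.inconsistency_threshold)

def negotiate_blacklist(suspects, neighbor_filters):
    """Phase two (illustrative): three-step negotiation via a mobile agent.
    Step 1: propose each suspect to reliable neighbors.
    Step 2: neighbors vote from their own observations.
    Step 3: on majority agreement, commit - everyone stops relaying
    the suspect's traffic."""
    blacklist = []
    for suspect in suspects:
        votes = [f.is_malicious(suspect) for f in neighbor_filters]
        if sum(votes) > len(votes) / 2:
            blacklist.append(suspect)
    return blacklist
```

In this sketch the "mobile agent" is reduced to a synchronous vote; in a real deployment the three steps would be asynchronous messages carried over the mesh network.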
Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. And existing defences are weak or nonexistent.
Be prepared in the near future, when you gaze into the blue skies, to perceive a whole series of strange-looking things. No, they will not be birds, nor planes, nor even Superman. They may be temporarily, and in some cases startlingly, mistaken for UFOs, given their bizarre and ominous appearance. But, in due course, they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched, serving essential functions and extending capabilities in vital infrastructures such as transportation, utilities, the electric grid, agriculture, emergency services, and many others. Rapidly advancing technologies have given unmanned aerial vehicles (UAVs/drones) the dramatic ability to perform functions that were inconceivable a mere few years ago.
Unmanned Aerial Systems (UAS) are being increasingly deployed for commercial, civilian, and military applications. The current UAS state of the art still depends on a remote human controller with robust wireless links to perform several of these applications. This lack of autonomy restricts the domains and tasks for which a UAS can be deployed. Endowing UAS with autonomy and intelligence will help overcome this hurdle and expand their use, improving safety and efficiency. The exponential increase in computing resources and the availability of large amounts of data in this digital era have led to the resurgence of machine learning from its last winter. Therefore, in this chapter, we discuss how some of the advances in machine learning, specifically deep learning and reinforcement learning, can be leveraged to develop next-generation autonomous UAS. We begin by motivating the chapter with a discussion of the applications, challenges, and opportunities of current UAS in the introductory section. We then provide an overview of the key deep learning and reinforcement learning techniques discussed throughout the chapter. A key area of focus essential to UAS autonomy is computer vision. Accordingly, we discuss how deep learning approaches have been used to accomplish some of the basic tasks that contribute to UAS autonomy. We then discuss how reinforcement learning can exploit this information to provide autonomous control and navigation for UAS. Next, we direct the reader to appropriate simulation suites and hardware platforms for rapidly prototyping novel machine learning based solutions for UAS. We additionally discuss the open problems and challenges pertaining to each aspect of developing autonomous UAS solutions, to shed light on potential research areas.
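As a concrete, if toy, instance of reinforcement learning for UAS control and navigation, the sketch below trains a tabular Q-learning agent to reach a goal waypoint on a small grid. The grid world, reward shaping, and hyperparameters are illustrative assumptions, not material from the chapter; a real UAS would use continuous state and deep function approximation.

```python
import random

def train_q_navigation(grid=5, goal=(4, 4), episodes=2000,
                       alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning for toy waypoint navigation on a grid.
    Reward: +1 for reaching the goal, small step cost otherwise."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up/down/right/left
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            # Epsilon-greedy exploration.
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda i: q(s, i)))
            dx, dy = actions[a]
            ns = (min(grid - 1, max(0, s[0] + dx)),
                  min(grid - 1, max(0, s[1] + dy)))
            r = 1.0 if ns == goal else -0.01
            # Standard Q-learning temporal-difference update.
            best_next = max(q(ns, i) for i in range(4))
            Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
            s = ns
            if s == goal:
                break
    return Q

def greedy_path(Q, grid=5, goal=(4, 4)):
    """Follow the learned policy greedily from the start state."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    s, path = (0, 0), [(0, 0)]
    for _ in range(2 * grid * grid):
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        dx, dy = actions[a]
        s = (min(grid - 1, max(0, s[0] + dx)),
             min(grid - 1, max(0, s[1] + dy)))
        path.append(s)
        if s == goal:
            break
    return path
```

The same loop structure carries over to deep reinforcement learning: the dictionary of Q-values is replaced by a neural network, and observations come from onboard sensors rather than grid coordinates.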
In mid-July, a UPS subsidiary called Flight Forward and the drone company Matternet started a project with the Wake Forest Baptist Health system in North Carolina. The companies' aims are decidedly futuristic: to ferry specialty medicines and protective equipment between two of the system's facilities, less than a half-mile apart. Think of it: little flying machines, zipping about at speeds up to 43 mph, bearing the goods to heal. At this point, though, the drone operations are a little, well, human. The quadcopters must be operated by specialized drone pilots, who must pass a challenging aeronautical knowledge test to get their licenses.
Recent successes combine reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, after just a few data samples, humans in real-world scenarios are able to provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, Cycle-of-Learning develops an effective transition from policies learned via human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths for human-agent teaming scenarios in which autonomous agents learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
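The three interaction modalities can be sketched in miniature. The tabular agent, the demonstration bonus, and the update rule below are illustrative assumptions chosen for brevity, not the actual Cycle-of-Learning implementation.

```python
import random

class CycleOfLearningAgent:
    """Toy sketch of cycling demonstration -> intervention -> evaluation
    into a reinforcement learning loop (illustrative, not the paper's method)."""

    def __init__(self, n_states, n_actions, alpha=0.3, gamma=0.9):
        self.Q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma = alpha, gamma
        self.n_actions = n_actions

    def learn_from_demonstration(self, trajectory, bonus=1.0):
        # Modality 1: push Q-values of demonstrated (state, action) pairs up,
        # so the initial policy imitates the human before any RL updates.
        for s, a in trajectory:
            self.Q[s][a] += bonus

    def act(self, s, eps=0.1, intervention=None):
        # Modality 2: a human intervention overrides the agent's own choice,
        # preventing catastrophic actions during training.
        if intervention is not None:
            return intervention
        if random.random() < eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.Q[s][a])

    def update(self, s, a, reward, s_next):
        # Modality 3: `reward` may come from the environment or from a human
        # evaluation of the behavior; either way it drives a standard
        # temporal-difference update.
        target = reward + self.gamma * max(self.Q[s_next])
        self.Q[s][a] += self.alpha * (target - self.Q[s][a])
```

The key idea the sketch preserves is the transition: demonstrations seed the policy, interventions correct it during rollouts, and evaluative reward lets plain reinforcement learning take over once the human steps back.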