Three years ago, Customs and Border Protection placed an order for self-flying aircraft that could launch on their own, rendezvous, and locate and monitor multiple targets on the ground without any human intervention. In its reasoning for the order, CBP said the level of monitoring required to secure America's long land borders from the sky was too cumbersome for people alone. To research and build the drones, CBP handed $500,000 to Mitre Corp., a trusted nonprofit Skunk Works that was already furnishing border police with prototype rapid DNA testing and smartwatch hacking technology. The resulting drones were "tested but not fielded operationally," as "the gap from simulation to reality turned out to be much larger than the research team originally envisioned," a CBP spokesperson says. This year, America's border police will test automated drones from Skydio, the Redwood City, Calif.-based startup that on Monday announced it had raised an additional $170 million in venture funding at a valuation of $1 billion. That brings Skydio's total raised to $340 million.
General Atomics Aeronautical Systems, Inc. (GA-ASI) has demonstrated the DARPA-developed Collaborative Operations in Denied Environment (CODE) autonomy engine on the company's Avenger Unmanned Aircraft System (UAS). CODE was used to gain a further understanding of cognitive artificial intelligence (AI) processing on larger UAS platforms for air-to-air targeting. Using a network-enabled Tactical Targeting Network Technology (TTNT) radio for mesh-network mission communications, GA-ASI demonstrated the integration of emerging Advanced Tactical Data Links (ATDL), as well as separation between flight-critical and mission-critical systems. During the autonomous flight, CODE software controlled the manoeuvring of the Avenger UAS for over two hours without human pilot input. GA-ASI extended the base software's behavioural functions for a coordinated air-to-air search with up to six aircraft, using five virtual aircraft for the purposes of the demonstration.
General Atomics Aeronautical Systems, Inc. (GA-ASI) has been awarded a contract by the U.S. Department of Defense's Joint Artificial Intelligence Center (JAIC) to develop enhanced autonomous sensing capabilities for unmanned aerial vehicles (UAVs). The JAIC Smart Sensor project aims to advance drone-based AI technology by demonstrating object recognition algorithms and employing onboard AI to automatically control UAV sensors and direct autonomous flight. GA-ASI will deploy these new capabilities on a MQ-9 Reaper UAV equipped with a variety of sensors, including GA-ASI's Reaper Defense Electronic Support System (RDESS) and Lynx Synthetic Aperture Radar (SAR). GA-ASI's Metis Intelligence, Surveillance and Reconnaissance (ISR) tasking and intelligence-sharing application, which enables operators to specify effects-based mission objectives and receive automatic notification of actionable intelligence, will be used to command the unmanned aircraft. J.R. Reid, GA-ASI Vice President of Strategic Development, commented: "GA-ASI is excited to leverage the considerable investment we have made to advance the JAIC's autonomous sensing objective. This will bring a tremendous increase in unmanned systems capabilities for applications across the full-range of military operations."
Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. And existing defences are weak or nonexistent.
Be prepared, in the near future, when you gaze into the blue skies, to perceive a whole series of strange-looking things: no, they will not be birds, nor planes, nor even Superman. They may temporarily, and in some cases startlingly, be mistaken for UFOs, given their bizarre and ominous appearance. But in due course they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched, serving essential functions that extend the capabilities of vital infrastructure such as transportation, utilities, the electric grid, agriculture, and emergency services. Rapidly advancing technologies have given unmanned aerial vehicles (UAVs, or drones) dramatic capabilities to perform functions that were inconceivable a mere few years ago.
Recent research on robotics has shown significant improvement, spanning from algorithms and mechanics to hardware architectures. Robots, including manipulators, legged robots, drones, and autonomous vehicles, are now widely applied in diverse scenarios. However, the high computation and data complexity of robotic algorithms pose great challenges to their application. On the one hand, the CPU platform is flexible enough to handle multiple robotic tasks, while the GPU platform has higher computational capacity and easy-to-use development frameworks, so it has been widely adopted in several applications. On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios. With specially designed hardware logic and algorithm kernels, FPGA-based accelerators can surpass CPUs and GPUs in performance and energy efficiency. In this paper, we give an overview of previous work on FPGA-based robotic accelerators covering different stages of the robotic system pipeline. An analysis of software and hardware optimization techniques and main technical issues is presented, along with some commercial and space applications, to serve as a guide for future work.
Over the last decade, we have seen significant progress in the development of robotics, spanning from algorithms and mechanics to hardware architectures. Various robotic systems, like manipulators, legged robots, unmanned aerial vehicles, and self-driving cars, have been designed for search and rescue, exploration, package delivery, entertainment, and many more applications and scenarios. These robots are beginning to demonstrate their full potential. Take drones, a type of aerial robot, for example: the number of registered drones grew by 2.83x between 2015 and 2019, reaching 1.32 million in 2019, and the FAA expects this number to reach 1.59 million by 2024. However, the computation and storage complexity of the robotic system, as well as its real-time and power constraints, hinder its wide application in latency-critical or power-limited scenarios. It is therefore essential to choose a proper compute platform for the robotic system. CPU and GPU are two widely used commercial compute platforms. The CPU is designed to handle a wide range of tasks quickly and is often used to develop novel algorithms; a typical CPU can achieve 10-100 GFLOPS with below 1 GOP/J power efficiency. In contrast, the GPU is designed with thousands of processor cores running simultaneously, enabling massive parallelism; a typical GPU can deliver up to 10 TOPS of performance, making it a good candidate for high-performance scenarios.
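The platform trade-off above can be made concrete with a back-of-the-envelope calculation using the throughput and efficiency figures cited in the text (10-100 GFLOPS and below 1 GOP/J for a CPU, up to 10 TOPS for a GPU). This is an illustrative sketch only: the workload size and the GPU energy efficiency are assumed values, not figures from the paper.

```python
# Back-of-the-envelope comparison of compute platforms for a robotic
# workload, using the figures cited in the text. The GPU efficiency
# (10 GOP/J) and the 50 GOP workload size are assumed for illustration.

def latency_ms(workload_gop: float, throughput_gops: float) -> float:
    """Time in milliseconds to run `workload_gop` giga-operations."""
    return workload_gop / throughput_gops * 1000.0

def energy_j(workload_gop: float, efficiency_gop_per_j: float) -> float:
    """Energy in joules to run the same workload at a given efficiency."""
    return workload_gop / efficiency_gop_per_j

workload = 50.0  # hypothetical perception task: 50 giga-operations

cpu_lat = latency_ms(workload, 100.0)     # CPU: up to ~100 GFLOPS
cpu_e   = energy_j(workload, 1.0)         # CPU: below ~1 GOP/J
gpu_lat = latency_ms(workload, 10_000.0)  # GPU: up to 10 TOPS = 10,000 GOPS
gpu_e   = energy_j(workload, 10.0)        # GPU: assumed ~10 GOP/J

print(f"CPU: {cpu_lat:.0f} ms, {cpu_e:.1f} J")
print(f"GPU: {gpu_lat:.0f} ms, {gpu_e:.1f} J")
```

Under these assumptions the GPU finishes the workload two orders of magnitude faster, which is why latency-critical robotic pipelines gravitate toward accelerators despite the CPU's flexibility.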
Unmanned Aerial Systems (UAS) are being increasingly deployed for commercial, civilian, and military applications. The current UAS state of the art still depends on a remote human controller with robust wireless links to perform several of these applications. This lack of autonomy restricts the domains and tasks for which a UAS can be deployed. Bringing autonomy and intelligence to UAS will help overcome this hurdle and expand their use while improving safety and efficiency. The exponential increase in computing resources and the availability of large amounts of data in this digital era have led to the resurgence of machine learning from its last winter. In this chapter, we therefore discuss how some of the advances in machine learning, specifically deep learning and reinforcement learning, can be leveraged to develop next-generation autonomous UAS. We begin by motivating the chapter with a discussion of the applications, challenges, and opportunities of current UAS in the introductory section. We then provide an overview of the key deep learning and reinforcement learning techniques used throughout the chapter. A key area of focus that is essential for UAS autonomy is computer vision; accordingly, we discuss how deep learning approaches have been used to accomplish some of the basic tasks that contribute to UAS autonomy. We then discuss how reinforcement learning can use this information to provide autonomous control and navigation for UAS. Next, we point the reader to appropriate simulation suites and hardware platforms that help to rapidly prototype novel machine learning based solutions for UAS. Finally, we discuss the open problems and challenges pertaining to each aspect of developing autonomous UAS solutions, to shed light on potential research areas.
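The reinforcement-learning-for-navigation idea mentioned above can be illustrated with a toy example: an agent learning, by trial and error, to reach a goal position. This is a minimal tabular Q-learning sketch on a one-dimensional corridor; all states, actions, and hyperparameters here are illustrative and not drawn from the chapter.

```python
import random

# Toy Q-learning sketch of RL-based navigation: an agent on a 1-D
# corridor of 5 cells learns to reach the goal cell at the right end.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]              # move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

random.seed(0)
for _ in range(500):            # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # clip to corridor
        r = 1.0 if s2 == GOAL else -0.01        # goal reward, step cost
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

In a real UAS setting the tabular state would be replaced by features extracted by the computer-vision pipeline, and the Q-table by a deep network, which is precisely the deep reinforcement learning combination the chapter surveys.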
Recent successes have combined reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called the Cycle-of-Learning, a reference to how different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, the Cycle-of-Learning develops an effective transition from policies learned using human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths for human-agent teaming scenarios in which autonomous agents learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
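The three interaction modalities described above (demonstration, intervention, evaluation) can be sketched as a schematic skeleton. This is an illustrative outline of the cycle's structure under simplifying assumptions (discrete states and actions, a tabular reward model), not the authors' implementation; all state and action names are hypothetical.

```python
# Schematic skeleton of the Cycle-of-Learning modalities:
# demonstrations seed the policy, interventions correct it online,
# and evaluations fit a reward model that can then drive RL updates.
# All names below are illustrative placeholders.

def behavioral_cloning(demos):
    """Seed policy: pick the human action seen most often per state."""
    counts = {}
    for state, action in demos:
        counts.setdefault(state, {}).setdefault(action, 0)
        counts[state][action] += 1
    return {s: max(acts, key=acts.get) for s, acts in counts.items()}

def fit_reward_model(feedback):
    """Learned reward: mean human evaluation score per (state, action)."""
    sums, ns = {}, {}
    for (s, a), score in feedback:
        sums[(s, a)] = sums.get((s, a), 0.0) + score
        ns[(s, a)] = ns.get((s, a), 0) + 1
    return lambda s, a: sums.get((s, a), 0.0) / max(ns.get((s, a), 1), 1)

# 1) Demonstration: the human shows the task.
demos = [("near_gate", "slow_down"), ("near_gate", "slow_down"),
         ("open_field", "speed_up")]
policy = behavioral_cloning(demos)

# 2) Intervention: a human override of an unsafe action is folded back
#    into the demonstration data.
policy = behavioral_cloning(demos + [("near_wall", "brake")])

# 3) Evaluation: human scores on transitions fit a reward model, which
#    would then replace the human in a reinforcement learning loop.
feedback = [(("near_gate", "slow_down"), 1.0),
            (("near_gate", "speed_up"), -1.0)]
reward = fit_reward_model(feedback)
```

The key design idea the sketch mirrors is that each modality demands progressively less human effort: full demonstrations first, occasional interventions next, and finally only evaluations, from which the learned reward lets the agent improve on its own.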
In a move that caused a ripple effect across the Middle East, Iranian General Qassem Soleimani was killed in a US drone strike near Baghdad's international airport on January 3. On that day, the Pentagon announced the attack was carried out "at the direction of the president". In a new report examining the legality of armed drones and the Soleimani killing in particular, Agnes Callamard, the UN special rapporteur on extrajudicial, summary or arbitrary executions, said the US raid that killed Soleimani was "unlawful". Callamard presented her report at the Human Rights Council in Geneva on Thursday. The United States, which is not a member after quitting the council in 2018, rejected the report, saying it gave "a pass to terrorists". In Callamard's view, the consequences of targeted killings by armed drones have been neglected by states.