Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. And existing defences are weak or nonexistent.
Hacking involves a lot of research. The first step is finding as much public documentation as possible, then getting access to a vehicle, and finally spending as much time as possible poking at the vehicle's interfaces. However, we also have to look at this from a return-on-investment (ROI) standpoint. If my costs to buy or acquire the vehicle and spend a week or two with it are high, I might not be content to use the resulting exploit just for fun; I may be motivated to hold onto it instead. But if I wait too long, a fix for the exploit may become available, negating my time and effort.
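The trade-off described above can be sketched numerically. All of the figures below (acquisition cost, payoff, weekly patch probability) are hypothetical, chosen only to illustrate the reasoning: the longer an exploit is held, the higher the chance a fix ships and zeroes out the investment.

```python
# Toy model of the exploit-ROI trade-off. Every number here is
# hypothetical, invented purely for illustration.

def expected_value(payoff, cost, weeks_held, weekly_patch_prob):
    """Expected net return if the exploit is held for `weeks_held`
    weeks before use, given a per-week probability of a patch."""
    survival = (1 - weekly_patch_prob) ** weeks_held  # chance it's still unpatched
    return survival * payoff - cost

# Hypothetical figures: $20k to acquire a vehicle and work on it for two
# weeks, a $50k payoff if the exploit still works, 10% weekly patch chance.
for weeks in (0, 4, 12, 26):
    ev = expected_value(payoff=50_000, cost=20_000,
                        weeks_held=weeks, weekly_patch_prob=0.10)
    print(f"hold {weeks:2d} weeks -> expected net ${ev:,.0f}")
```

The expected value declines monotonically with holding time, which is the "wait too long" effect the text describes.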
Be prepared, in the near future, when you gaze into the blue skies, to perceive a whole series of strange-looking things – no, they will not be birds, nor planes, nor even Superman. They may be temporarily, and in some cases startlingly, mistaken for UFOs, given their bizarre and ominous appearance. But, in due course, they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched, serving essential functions and extending capabilities in our vital infrastructures such as transportation, utilities, the electric grid, agriculture, emergency services, and many others. Rapidly advancing technologies have given unmanned aerial vehicles (UAVs/drones) dramatic capabilities, enabling them to perform functions that were inconceivable just a few years ago.
Ignorance of history is a badge of honour in Silicon Valley. "The only thing that matters is the future," self-driving-car engineer Anthony Levandowski told The New Yorker in 2018. Levandowski, formerly of Google, Uber and Google's autonomous-vehicle subsidiary Waymo (and recently sentenced to 18 months in prison for stealing trade secrets), is no outlier. The gospel of 'disruptive innovation' depends on the abnegation of history. 'Move fast and break things' was Facebook's motto. Another word for this is heedlessness. And here are a few more: negligence, foolishness and blindness.
In mid-July, a UPS subsidiary called Flight Forward and the drone company Matternet started a project with the Wake Forest Baptist Health system in North Carolina. The companies' aims are decidedly futuristic: to ferry specialty medicines and protective equipment between two of the system's facilities, less than a half-mile apart. Think of it: little flying machines, zipping about at speeds up to 43 mph, bearing the goods to heal. At this point, though, the drone operations are a little, well, human. The quadcopters must be operated by specialized drone pilots, who must pass a challenging aeronautical knowledge test to get their licenses.
Recent successes combine reinforcement learning algorithms and deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how different human interaction modalities – namely, task demonstration, intervention, and evaluation – are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, Cycle-of-Learning develops an effective transition from policies learned using human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths to human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
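As a rough illustration of the first two interaction modalities above (demonstration and intervention), here is a toy sketch on a 1-D goal-reaching task. The task, the lookup-table "policy", and the simulated human are all stand-ins invented for this example; the actual Cycle-of-Learning work uses deep reinforcement learning on robotics tasks, and its reward-learning and RL phases are omitted here for brevity.

```python
import random

GOAL = 5  # target position on a 1-D line

def human_action(state):
    """Stand-in for the human teammate: step toward the goal, stop there."""
    if state < GOAL:
        return 1
    if state > GOAL:
        return -1
    return 0

def rollout(policy, start=0, steps=20, human_override=False):
    """Run an episode; optionally let the 'human' intervene on bad actions."""
    state, data = start, []
    for _ in range(steps):
        action = policy(state)
        if human_override and action != human_action(state):
            action = human_action(state)  # intervention: human corrects the agent
        data.append((state, action))      # corrected (state, action) pairs are kept
        state += action
    return state, data

# Phase 1: demonstrations -> a lookup-table policy (behavior cloning).
table = {}
_, demo = rollout(human_action, start=0)
for s, a in demo:
    table[s] = a          # demo only covers states 0..GOAL

def policy(state):
    # fall back to a random action in states never demonstrated
    return table.get(state, random.choice([-1, 0, 1]))

# Phase 2: interventions fill the gaps the demonstrations missed
# (here, the negative states the demo rollout never visited).
_, corrections = rollout(policy, start=-5, human_override=True)
for s, a in corrections:
    table[s] = a

# Evaluation: the corrected policy now reaches the goal from the new start.
final_state, _ = rollout(policy, start=-5)
print("final distance to goal:", abs(final_state - GOAL))
```

The point of the sketch is the same as the abstract's: demonstrations alone leave coverage gaps, and interventions patch exactly the states where the cloned policy misbehaves, without requiring thousands of trial-and-error samples.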
Because information in military situations, as in self-driving cars, must be processed faster than humans can manage, the computational determination of context, also known as situational assessment, is increasingly important. In this article, we introduce the topic of context, and we discuss what is known about the heretofore intractable research problem of the effects of interdependence, present in the best of human teams; we close by proposing that interdependence must be mastered mathematically to operate human-machine teams efficiently, to advance theory, and to make the machine actions directed by AI explainable to team members and society. The special topic articles in this issue and a subsequent issue of AI Magazine review ongoing mature research and operational programs that address context for human-machine teams. In 1983, William Lawless blew the whistle on Department of Energy (DOE) mismanagement of military radioactive wastes. After his PhD, he joined DOE's citizen advisory board at its Savannah River Site, where he coauthored over 100 recommendations on its cleanup.
When a bald eagle tangled unexpectedly with a government drone last month in Michigan, it won, emerging from the scene unscathed. The drone, officials say, is somewhere in Lake Michigan. The Michigan Department of Environment, Great Lakes and Energy disclosed the attack on Thursday, almost one month after the eagle sent the $950 drone into the Great Lake. The trouble began when Hunter King, an environmental quality analyst with the department, sent a drone over Michigan's Upper Peninsula to map shoreline erosion, the department said. When his drone's reception started to sputter, he commanded it to return home.