'Machines set loose to slaughter': the dangerous rise of military AI

#artificialintelligence

Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. And existing defences are weak or nonexistent.


Drones – the New Critical Infrastructure

#artificialintelligence

Be prepared, in the near future, to gaze into the blue skies and perceive a whole series of strange-looking things – no, they will not be birds, nor planes, nor even Superman. They may temporarily, and in some cases startlingly, be mistaken for UFOs, given their bizarre and ominous appearance. But, in due course, they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched, serving essential functions and extending capabilities in vital infrastructure such as transportation, utilities, the electric grid, agriculture, and emergency services. Rapidly advancing technologies have given unmanned aerial vehicles (UAVs, or drones) dramatic capabilities, letting them perform functions that were inconceivable a mere few years ago.


Scientists use big data to sway elections and predict riots -- welcome to the 1960s

Nature

Ignorance of history is a badge of honour in Silicon Valley. "The only thing that matters is the future," self-driving-car engineer Anthony Levandowski told The New Yorker in 2018. Levandowski, formerly of Google, Uber and Google's autonomous-vehicle subsidiary Waymo (and recently sentenced to 18 months in prison for stealing trade secrets), is no outlier. The gospel of 'disruptive innovation' depends on the abnegation of history. 'Move fast and break things' was Facebook's motto. Another word for this is heedlessness. And here are a few more: negligence, foolishness and blindness.


No, Amazon Won't Deliver You a Burrito by Drone Anytime Soon

WIRED

In mid-July, a UPS subsidiary called Flight Forward and the drone company Matternet started a project with the Wake Forest Baptist Health system in North Carolina. The companies' aims are decidedly futuristic: to ferry specialty medicines and protective equipment between two of the system's facilities, less than a half-mile apart. Think of it: little flying machines, zipping about at speeds up to 43 mph, bearing the goods to heal. At this point, though, the drone operations are a little, well, human. The quadcopters must be operated by specialized drone pilots, who must pass a challenging aeronautical knowledge test to get their licenses.


Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

arXiv.org Artificial Intelligence

Recent successes have combined reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities, namely task demonstration, intervention, and evaluation, are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the learning rate of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning alone. Finally, Cycle-of-Learning develops an effective transition from policies learned through human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths toward human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
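The three interaction modalities the abstract names (demonstration, intervention, evaluation) can be illustrated with a deliberately tiny sketch. This is not the paper's actual Cycle-of-Learning algorithm; it is a toy one-dimensional task invented for illustration, where the helper functions `human_demo`, `human_intervene`, and `human_evaluate` stand in for a human teammate, and a simple tabular value update plays the role of the reinforcement learner.

```python
import random
from collections import defaultdict

GOAL = 5  # target position on a one-dimensional line; states are 0..GOAL-1


def human_demo(state):
    # Demonstration modality: the human shows the correct action
    # (step toward the goal).
    return 1 if state < GOAL else -1


def human_intervene(state, action):
    # Intervention modality: the human blocks a catastrophic action
    # (stepping away from the goal) and substitutes the corrective one.
    moves_toward_goal = (action == 1) == (state < GOAL)
    return action if moves_toward_goal else -action


def human_evaluate(state, action):
    # Evaluation modality: the human scores the action after the fact.
    return 1 if (action == 1) == (state < GOAL) else -1


def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # tabular action values, keyed by (state, action)
    alpha = 0.5             # learning rate for the evaluation-driven update

    # Stage 1: warm-start from a handful of demonstrations.
    for state in range(GOAL):
        q[(state, human_demo(state))] += 1.0

    # Stages 2-3: the agent proposes actions, the human intervenes when
    # needed, and human evaluations drive a simple value update.
    for _ in range(episodes):
        state = rng.randrange(GOAL)
        proposed = max((1, -1), key=lambda a: q[(state, a)])
        action = human_intervene(state, proposed)
        reward = human_evaluate(state, action)
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q


def greedy_policy(q, state):
    # The learned policy: pick the higher-valued action in this state.
    return max((1, -1), key=lambda a: q[(state, a)])
```

After training, the greedy policy steps toward the goal from every state, showing how the demonstration warm start and the intervention/evaluation loop reinforce one another; the paper's contribution is, in part, managing the transition between these stages in far richer settings.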


Artificial intelligence, Autonomy, and Human-Machine Teams -- Interdependence, Context, and Explainable AI

Interactive AI Magazine

In military situations, as in self-driving cars, information must be processed faster than humans can manage, so determining context computationally, also known as situational assessment, is increasingly important. In this article, we introduce the topic of context, and we discuss what is known about the heretofore intractable research problem of the effects of interdependence, present in the best human teams; we close by proposing that interdependence must be mastered mathematically to operate human-machine teams efficiently, to advance theory, and to make machine actions directed by AI explainable to team members and society. The special topic articles in this issue and a subsequent issue of AI Magazine review ongoing mature research and operational programs that address context for human-machine teams. In 1983, William Lawless blew the whistle on Department of Energy (DOE) mismanagement of military radioactive wastes. After his PhD, he joined DOE's citizen advisory board at its Savannah River Site, where he coauthored over 100 recommendations on its cleanup.


A bald eagle takes on a government drone. The bald eagle wins

#artificialintelligence

When a bald eagle tangled unexpectedly with a government drone last month in Michigan, it won, emerging from the scene unscathed; officials say the drone is somewhere in Lake Michigan. The Michigan Department of Environment, Great Lakes and Energy disclosed the attack on Thursday, almost one month after the eagle sent the $950 drone into the Great Lake. The trouble began when Hunter King, an environmental quality analyst with the department, sent a drone over Michigan's Upper Peninsula to map shoreline erosion, the department said. His drone's reception started to sputter, so he commanded it to return home.


UTSA Launches Research Center to Expand Reach of Artificial Intelligence

#artificialintelligence

Self-driving cars, single-pilot commercial planes, robotic soldiers, and widespread gene editing may still be things of the future, but a new research center in San Antonio is working to bring these and other artificial intelligence innovations to life. The University of Texas at San Antonio officially launched its newest research center, the UTSA Matrix AI Consortium, on Thursday morning via a livestream kickoff event. The consortium will bring together experts studying artificial intelligence to expand the use and deployment of AI. "This initiative is a concerted effort to promote AI innovation, something I'm a big fan about these days," UTSA President Taylor Eighmy said.


AI 50: America's Most Promising Artificial Intelligence Companies

#artificialintelligence

Our second annual list highlights promising, private, U.S.-based companies that are using artificial intelligence in meaningful business-oriented ways.


Regulating human control over autonomous systems

arXiv.org Artificial Intelligence

In recent years, many sectors have seen significant progress in automation, driven by advances in artificial intelligence and machine learning. There are already automated robotic weapons able to evaluate and engage targets on their own, and there are already autonomous vehicles that do not need a human driver. It is argued that the use of increasingly autonomous systems (AS) should be guided by a policy of human control, according to which humans should exercise a significant level of judgment over AS. While in the military sector there is a fear that AS could mean humans losing control over life-and-death decisions, in the transportation domain, by contrast, there is a strongly held view that autonomy could bring significant operational benefits by removing the need for a human driver. This article explores the notion of human control in the United States across the two domains of defense and transportation. Operationalizing the emerging policies of human control yields a typology of direct and indirect human controls exercised over the use of AS. The typology helps steer the debate away from the linguistic complexities of the term "autonomy." It identifies instead where human factors are undergoing important changes and ultimately informs the formulation of more detailed rules and standards, which differ across domains, applications, and sectors.