The Pentagon Inches Toward Letting AI Control Weapons

WIRED

Last August, several dozen military drones and tank-like robots took to the skies and roads 40 miles south of Seattle. Their mission: find terrorists suspected of hiding among several buildings. So many robots were involved in the operation that no human operator could keep a close eye on all of them, so they were given instructions to find, and when necessary eliminate, enemy combatants. The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.


Israel shared Iranian General Soleimani's cell phone numbers with US intelligence before drone strike: report

FOX News

Israel shared three cell phone numbers used by Qasem Soleimani with U.S. intelligence in the hours before American drones unleashed Hellfire missiles on the Iranian general last year, Yahoo News reported Saturday. The revelation sheds new light on the role that Israel played in the killing of Soleimani, who the State Department says was responsible for hundreds of U.S. troop deaths as the head of the Revolutionary Guard's elite Quds Force. The drone strike occurred shortly after midnight on Jan. 2, 2020, as Soleimani and his entourage were leaving Baghdad's international airport.


Artificial intelligence and war without humans

#artificialintelligence

It's a simple fact, says General John "Mike" Murray: we're going to have to learn to trust artificial intelligence on the battlefield. And that means the rules governing human control over artificial intelligence might need to be relaxed. Speaking from Austin, Texas, at The Future Character of War and the Law of Armed Conflict online event, Murray laid out a future battle scenario involving the rapid advance of artificial intelligence in the US military and the ethical challenges it presents. "If you think about things like a swarm of, let's say a hundred semi-autonomous or autonomous drones, some lethal, some sensing, some jamming, some in command and control -- think back to the closing ceremony of the Seoul Olympics. Is it within a human's ability to pick out which ones have to be engaged and then make 100 individual engagement decisions against a drone swarm?" said Murray, commander of Army Futures Command (AFC). "And is it even necessary to have a human in the loop, if you're talking about effects against an unmanned platform or against a machine?"


Trevor Paglen warns about the dangers of Artificial Intelligence in new documentary Unseen Skies

#artificialintelligence

A photograph of the sky by Trevor Paglen can look like a massive abstraction, except for a tiny speck, a surveillance drone, spotted like a malignant dot on a chest x-ray. His images of secluded military sites in Nevada can also ooze with colour from the churning heat and dust. In the new documentary film Unseen Skies, directed by Yaara Bou Melhem, Paglen calls the effect "impressionistic haze". Photographing those places, often from miles away (or farther), is about "seeing and not seeing at the same time," Paglen says. "For me those images were about capturing that paradox."


No Longer Sci-Fi: Laser Guns Are Coming to the U.S. Military

#artificialintelligence

The threat of enemy drone attacks is a key part of the inspiration for newer kinds of laser weapons, which can incinerate drones without generating large amounts of explosive fragmentation. Moreover, newer lasers can scale attacks to align with the target and desired combat effect and, perhaps most of all, travel at the speed of light to destroy drones quickly, ideally before they are able to strike. Attacking drone swarms may approach so quickly that kinetic responses, such as interceptor-missile fire-control systems, may be challenged in certain respects, depending upon the extent of artificial intelligence (AI)-enabled target recognition technology and computer automation. The question of scaling lasers to optimize power input for counter-drone strikes is addressed in an essay from May of last year, "Testing the Efficiency of Laser Technology to Destroy Rogue Drones," in the Security & Defense Quarterly from War Studies University. The essay describes innovative experimental methods of "incorporating a laser module and groups of optical lenses to focus the power in one point to carbonize any target."
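The scaling question largely comes down to irradiance: concentrating a fixed output power into a smaller spot raises the power density delivered to the target. As a rough, hypothetical back-of-envelope sketch (the 10 kW output and spot sizes below are illustrative assumptions, not figures from the essay):

```python
import math

# Illustration of why focusing matters for counter-drone lasers: the damage
# mechanism depends on irradiance (W/cm^2) at the target, which grows as the
# beam spot shrinks. All numbers are assumptions chosen for illustration.

power_w = 10_000.0  # assumed 10 kW laser output

for spot_radius_cm in (10.0, 1.0, 0.1):
    area_cm2 = math.pi * spot_radius_cm ** 2   # beam spot area on target
    irradiance = power_w / area_cm2            # power density at the target
    print(f"spot radius {spot_radius_cm:>5} cm -> {irradiance:,.0f} W/cm^2")

# Halving the spot radius quadruples irradiance, which is why lens groups
# that concentrate the beam "in one point" dominate the engineering problem.
```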


Hitting the Books: How IBM's metadata research made US drones even deadlier

Engadget

If there's one thing the United States military gets right, it's lethality. Yet even once the US military has you in its sights, it may not know who you actually are -- such are these so-called "signature strikes" -- even as that wrathful finger of God is called down from on high. As Kate Crawford, Microsoft Research principal and co-founder of the AI Now Institute at NYU, lays out in this fascinating excerpt from her new book, Atlas of AI, the military-industrial complex is alive and well and now leveraging metadata surveillance scores derived by IBM to decide which home/commute/gender reveal party to drone strike next. And if you think that same insidious technology isn't already trickling down to infest the domestic economy, I have a credit score to sell you. Excerpted from Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford, published by Yale University Press.


Attack of the drones: the mystery of disappearing swarms in the US midwest

The Guardian

At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.


Analyst pleads guilty to leaking secrets about drone program

FOX News

A former Air Force intelligence analyst pleaded guilty Wednesday to leaking classified documents to a reporter about military drone strikes against al-Qaida and other terrorist targets. The guilty plea from Daniel Hale, 33, of Nashville, Tennessee, came just days before he was slated to go on trial in federal court in Alexandria, Virginia, for violating the World War I-era Espionage Act. Hale admitted leaking roughly a dozen secret and top-secret documents to a reporter in 2014 and 2015, when he was working for a contractor as an analyst at the National Geospatial-Intelligence Agency (NGA).


Adding AI to Autonomous Weapons Increases Risks to Civilians in Armed Conflict

#artificialintelligence

Earlier this month, a high-level, congressionally mandated commission released its long-awaited recommendations for how the United States should approach artificial intelligence (AI) for national security. The recommendations were part of a nearly 800-page report from the National Security Commission on AI (NSCAI) that advocated for the use of AI but also highlighted important conclusions on key risks posed by AI-enabled and autonomous weapons, particularly the dangers of unintended escalation of conflict. The commission identified these risks as stemming from several factors, including system failures, unknown interactions between these systems in armed conflict, challenges in human-machine interaction, and an increasing speed of warfare that reduces the time and space for de-escalation. These same factors also contribute to the inherent unpredictability of autonomous weapons, whether AI-enabled or not. From a humanitarian and legal perspective, the NSCAI could have explored in more depth the risks such unpredictability poses to civilians in conflict zones and to international law.


NeBula: Quest for Robotic Autonomy in Challenging Environments; TEAM CoSTAR at the DARPA Subterranean Challenge

arXiv.org Artificial Intelligence

This paper presents and discusses algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved 2nd and 1st place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including: (i) geometric and semantic environment mapping; (ii) a multi-modal positioning system; (iii) traversability analysis and local planning; (iv) global motion planning and exploration behavior; (v) risk-aware mission planning; (vi) networking and decentralized reasoning; and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, along with the specific results and lessons learned from fielding this solution on the challenging courses of the DARPA Subterranean Challenge competition.
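As a loose illustration of what "reasoning and decision making in the belief space" means in practice: rather than committing to a single estimate of the world state, the robot maintains a probability distribution over states, updates it with Bayes' rule as noisy sensor readings arrive, and chooses actions by expected cost under that distribution. The sketch below is a generic, hypothetical toy example (the states, sensor model, and costs are invented for illustration and are not from the CoSTAR codebase):

```python
import numpy as np

# Hypothetical discrete world states for one corridor cell ahead of the robot.
STATES = ["traversable", "blocked", "hazard"]

def update_belief(belief, likelihoods):
    """Bayesian belief update: posterior is proportional to likelihood * prior."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

def expected_cost(belief, costs):
    """Cost of an action under uncertainty = expectation over the belief."""
    return float(belief @ costs)

# Uniform prior: we know nothing about the cell yet.
belief = np.full(len(STATES), 1.0 / len(STATES))

# Assumed sensor model: P(observation | state) for a "looks clear" reading.
looks_clear = np.array([0.8, 0.15, 0.05])
belief = update_belief(belief, looks_clear)
print(dict(zip(STATES, belief.round(3))))   # posterior over states

# Risk-aware choice: compare expected cost of driving through vs. detouring.
drive_costs = np.array([1.0, 10.0, 100.0])  # cheap if traversable, dire if hazard
detour_cost = 5.0
action = "drive" if expected_cost(belief, drive_costs) < detour_cost else "detour"
print(action)
```

Even after a "looks clear" reading, the residual 5% probability of a hazard makes the expected cost of driving (7.3) exceed the detour cost (5.0), so a risk-aware planner detours; a planner acting on the single most likely state would have driven through.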