At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.
Boston Dynamics' robotic dog Spot was one of several robots tested by the French army during training sessions at a military school in northwest France, The Verge and Ouest-France have reported. It was used during a two-day training session with the aim of "measuring the added value of robots in combat action," said school commandant Jean-Baptiste Cavalier. The exercises aimed to get students thinking about how robots might be deployed in future combat situations. The students designed three offensive and defensive missions, with Spot used primarily for reconnaissance; each scenario was run by the students first without and then with the aid of the robots. Other bots deployed were OPTIO-X20, a remote-controlled tank-like vehicle armed with a cannon, and Barakuda, an armor-plated wheeled drone designed to provide cover to advancing soldiers.
A former Air Force intelligence analyst pleaded guilty Wednesday to leaking classified documents to a reporter about military drone strikes against al-Qaida and other terrorist targets. The guilty plea from Daniel Hale, 33, of Nashville, Tennessee, comes just days before he was slated to go on trial in federal court in Alexandria, Virginia, for violating the World War I-era Espionage Act. Hale admitted leaking roughly a dozen secret and top-secret documents to a reporter in 2014 and 2015, when he was working for a contractor as an analyst at the National Geospatial-Intelligence Agency (NGA).
Artificial Intelligence is already part of our lives, and as the technology matures it will play a key role in future wars. The accuracy and precision of today's weapons are steadily forcing contemporary battlefields to empty of human combatants. As more and more sensors fill the battlespace, sending vast amounts of data back to analysts, humans struggle to make sense of the mountain of information gathered. This is where artificial intelligence (AI) comes in – learning algorithms that thrive on big data; in fact, the more data these systems analyse, the more accurate they can be. In short, AI is the ability of a system to "think" in a limited way, working specifically on problems normally associated with human intelligence, such as pattern and speech recognition, translation and decision-making.
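The claim that learning systems become more accurate as they analyse more data can be made concrete with a toy sketch. This is illustrative only: the two-Gaussian synthetic data and the nearest-class-mean classifier are invented for the example and do not represent any fielded system.

```python
import random

# Toy demonstration of "more data -> more accurate": a nearest-class-mean
# classifier trained on growing amounts of synthetic 1-D data.
random.seed(0)

def sample(n):
    """Draw n labelled points from two overlapping Gaussians (the two classes)."""
    out = []
    for _ in range(n):
        label = random.random() < 0.5
        out.append((random.gauss(1.0 if label else -1.0, 1.5), label))
    return out

def fit(train):
    """Estimate the mean of each class from the training data."""
    pos = [x for x, y in train if y]
    neg = [x for x, y in train if not y]
    return sum(pos) / len(pos), sum(neg) / len(neg)

def predict(means, x):
    """Classify a point by whichever class mean is nearer."""
    mu_pos, mu_neg = means
    return abs(x - mu_pos) < abs(x - mu_neg)

test_set = sample(2000)          # held-out evaluation data
for n in (20, 200, 20000):       # growing training sets
    means = fit(sample(n))
    acc = sum(predict(means, x) == y for x, y in test_set) / len(test_set)
    print(f"{n:6d} training points -> accuracy {acc:.3f}")
```

With larger training sets the estimated class means settle near their true values, so accuracy typically climbs toward the best achievable on this overlapping data; the same scaling intuition is what the passage attributes to big-data learning systems.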
Agha, Ali, Otsu, Kyohei, Morrell, Benjamin, Fan, David D., Thakker, Rohan, Santamaria-Navarro, Angel, Kim, Sung-Kyun, Bouman, Amanda, Lei, Xianmei, Edlund, Jeffrey, Ginting, Muhammad Fadhil, Ebadi, Kamak, Anderson, Matthew, Pailevanian, Torkom, Terry, Edward, Wolf, Michael, Tagliabue, Andrea, Vaquero, Tiago Stegun, Palieri, Matteo, Tepsuporn, Scott, Chang, Yun, Kalantari, Arash, Chavez, Fernando, Lopez, Brett, Funabiki, Nobuhiro, Miles, Gregory, Touma, Thomas, Buscicchio, Alessandro, Tordesillas, Jesus, Alatur, Nikhilesh, Nash, Jeremy, Walsh, William, Jung, Sunggoo, Lee, Hanseob, Kanellakis, Christoforos, Mayo, John, Harper, Scott, Kaufmann, Marcel, Dixit, Anushri, Correa, Gustavo, Lee, Carlyn, Gao, Jay, Merewether, Gene, Maldonado-Contreras, Jairo, Salhotra, Gautam, Da Silva, Maira Saboia, Ramtoula, Benjamin, Fakoorian, Seyed, Hatteland, Alexander, Kim, Taeyeon, Bartlett, Tara, Stephens, Alex, Kim, Leon, Bergh, Chuck, Heiden, Eric, Lew, Thomas, Cauligi, Abhishek, Heywood, Tristan, Kramer, Andrew, Leopold, Henry A., Choi, Chris, Daftry, Shreyansh, Toupet, Olivier, Wee, Inhwan, Thakur, Abhishek, Feras, Micah, Beltrame, Giovanni, Nikolakopoulos, George, Shim, David, Carlone, Luca, Burdick, Joel
This paper presents and discusses algorithms, hardware, and software architecture developed by the TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved 2nd and 1st place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including: (i) geometric and semantic environment mapping; (ii) a multi-modal positioning system; (iii) traversability analysis and local planning; (iv) global motion planning and exploration behavior; (v) risk-aware mission planning; (vi) networking and decentralized reasoning; and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g. wheeled, legged, flying), in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
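Reasoning in the belief space means maintaining a probability distribution over states rather than a single estimate. As a minimal sketch of that idea (not NeBula's actual code: the corridor grid, motion model, and door sensor below are invented for illustration), a discrete Bayes filter can localize a robot by alternating motion prediction and sensor updates:

```python
# Minimal discrete Bayes filter: a belief (probability distribution) over
# a robot's position on a 1-D corridor grid. Illustrative only -- the grid
# size, motion model, and sensor model are assumptions for this sketch.

GRID = 10            # corridor cells 0..9
DOORS = {2, 5, 8}    # cells where a (noisy) door detector should fire

def normalize(belief):
    s = sum(belief)
    return [b / s for b in belief]

def predict(belief):
    """Motion model: move one cell right with p=0.8, stay put with p=0.2."""
    new = [0.0] * GRID
    for i, b in enumerate(belief):
        new[(i + 1) % GRID] += 0.8 * b
        new[i] += 0.2 * b
    return new

def update(belief, saw_door):
    """Sensor model: the door detector reports correctly 90% of the time."""
    post = []
    for i, b in enumerate(belief):
        at_door = i in DOORS
        likelihood = 0.9 if at_door == saw_door else 0.1
        post.append(likelihood * b)
    return normalize(post)

belief = [1.0 / GRID] * GRID            # uniform prior: position unknown
for obs in (True, False, False, True):  # a short observation sequence
    belief = update(predict(belief), obs)

best = max(range(GRID), key=lambda i: belief[i])
print("most likely cell:", best)
```

Each cycle blends the motion prediction with the sensor evidence, so the belief remains a normalized distribution that a downstream planner can query for risk, which is the kind of uncertainty-aware reasoning the abstract describes at much larger scale.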
This work provides a starting point for researchers interested in gaining a deeper understanding of the big picture of artificial intelligence (AI). To this end, a narrative is conveyed that allows the reader to develop an objective view on current developments that is free from false promises that dominate public communication. An essential takeaway for the reader is that AI must be understood as an umbrella term encompassing a plethora of different methods, schools of thought, and their respective historical movements. Consequently, a bottom-up strategy is pursued in which the field of AI is introduced by presenting various aspects that are characteristic of the subject. This paper is structured in three parts: (i) Discussion of current trends revealing false public narratives, (ii) an introduction to the history of AI focusing on recurring patterns and main characteristics, and (iii) a critical discussion on the limitations of current methods in the context of the potential emergence of a strong(er) AI. It should be noted that this work does not cover any of these aspects holistically; rather, the content addressed is a selection made by the author and subject to a didactic strategy.
Delseny, Hervé, Gabreau, Christophe, Gauffriau, Adrien, Beaudouin, Bernard, Ponsolle, Ludovic, Alecu, Lucian, Bonnin, Hugues, Beltran, Brice, Duchel, Didier, Ginestet, Jean-Brice, Hervieu, Alexandre, Martinez, Ghilaine, Pasquet, Sylvain, Delmas, Kevin, Pagetti, Claire, Gabriel, Jean-Marc, Chapdelaine, Camille, Picard, Sylvaine, Damour, Mathieu, Cappi, Cyril, Gardès, Laurent, De Grancey, Florence, Jenn, Eric, Lefevre, Baptiste, Flandin, Gregory, Gerchinovitz, Sébastien, Mamalet, Franck, Albore, Alexandre
Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles, recognizing voice, etc. It is also an opportunity to implement and embed new capabilities that are out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have so far only been applied in systems where their benefits are considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint-Exupéry de Toulouse (IRT), as part of the DEEL Project.
This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Deep in the Mojave Desert, 60 miles from the city of Barstow, is the Slash X Ranch Cafe, a former ranch where dirt bike riders and ATV adventurers can drink beer and eat burgers with fellow daredevils speeding across the desert. Displayed on a wall alongside trucker caps and taxidermy is a plaque that memorializes the 2004 DARPA Grand Challenge, a 142-mile race whose starting point was at Slash X Ranch Cafe. It was the first race in the world without human drivers. Instead, it featured the fever-dream inventions -- robotic motorcycles, monster Humvees -- of a handful of software engineers who were hellbent on creating fully autonomous vehicles and winning the million-dollar prize offered by the Defense Department's Defense Advanced Research Projects Agency.
Former Secretary of the Navy J. William Middendorf II, of Little Compton, lays out the threat posed by the Chinese Communist Party in his recent book, "The Great Nightfall." With the emerging priority of artificial intelligence (AI), China is shifting away from a strategy of neutralizing or destroying an enemy's conventional military assets -- its planes, ships and army units. Its AI strategy is now evolving toward dominating what are termed adversaries' "systems-of-systems" -- the combination of all their intelligence and conventional military assets. China would first attempt to disable the information networks that bind its adversaries' military systems and assets together. It would then destroy individual elements of these now-disaggregated forces, probably with missile and naval strikes.
Zhang, Daniel, Mishra, Saurabh, Brynjolfsson, Erik, Etchemendy, John, Ganguli, Deep, Grosz, Barbara, Lyons, Terah, Manyika, James, Niebles, Juan Carlos, Sellitto, Michael, Shoham, Yoav, Clark, Jack, Perrault, Raymond
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.