Z Advanced Computing, Inc. (ZAC) of Potomac, MD announced on August 27 that it has been funded by the US Air Force to apply ZAC's detailed 3D image recognition technology, based on Explainable-AI, to aerial image/object recognition for drones (unmanned aerial vehicles, or UAVs). ZAC is the first to demonstrate Explainable-AI, in which various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks, even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.
Drone images accumulate much faster than they can be analyzed. Researchers have developed a new approach that combines crowdsourcing and machine learning to speed up the process. Who would win in a real-life game of "Where's Waldo," humans or computers? A recent study suggests that when speed and accuracy are critical, an approach combining both human and machine intelligence would take the prize. With drones being used to monitor everything from natural disaster sites to pollution to wildlife populations, analyzing drone images in real time has become a critically important big data challenge.
Russian President Vladimir Putin warned Friday that AI development "raises colossal opportunities and threats that are difficult to predict now." Speaking in a lecture to students, Putin cautioned that "it would be strongly undesirable if someone wins a monopolist position." Future wars will be fought by autonomous drones, Putin suggested, and "when one party's drones are destroyed by drones of another, it will have no other choice but to surrender." U.N. urged to address lethal autonomous weapons. AI experts worldwide are also concerned. On August 20, 116 founders of robotics and artificial intelligence companies from 26 countries, including Elon Musk and Google DeepMind's Mustafa Suleyman, signed an open letter asking the United Nations to "urgently address the challenge of lethal autonomous weapons (often called 'killer robots') and ban their use internationally."
Leaders in the fields of AI and robotics, including Elon Musk and Google DeepMind's Mustafa Suleyman, have signed a letter calling on the United Nations to ban lethal autonomous weapons, otherwise known as "killer robots." In their petition, the group states that the development of such technology would usher in a "third revolution in warfare" that could equal the invention of gunpowder and nuclear weapons. "Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," write the signatories. "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways." The letter is signed by the founders of 116 AI and robotics companies from 26 countries, and was published this weekend ahead of the International Joint Conference on Artificial Intelligence (IJCAI).
More than 2,400 researchers, scientists, engineers, entrepreneurs and others have signed a pledge – organised by the Future of Life Institute (FLI) – promising not to develop lethal autonomous weapons. In addition to many prominent individuals, the list of signatories includes over 160 AI-related firms and organisations from around the world, such as Google DeepMind, XPRIZE Foundation, University College London, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Clearpath Robotics and OTTO Motors. The pledge is being announced today at the annual International Joint Conference on Artificial Intelligence (IJCAI) in Sweden, which draws over 5,000 of the world's leading AI researchers. Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.