Military


Perspecta Labs to Provide Advanced Photonic Edge Artificial Intelligence Compact Hardware Research for DARPA

#artificialintelligence

Perspecta Inc. announced that its innovative applied research arm, Perspecta Labs, was awarded a prime contract from the U.S. Defense Advanced Research Projects Agency (DARPA) to provide Photonic Edge AI Compact Hardware (PEACH) research under DARPA's Artificial Intelligence (AI) Exploration program. The contract, which represents new work for the company, has a total value of $1 million and work will be performed over 18 months. The goal of the PEACH program is to research and develop novel AI processing architectures in combination with innovative photonic hardware to enable breakthrough AI functionality with significant reduction in hardware complexity, latency and power consumption. Perspecta Labs will create a novel multiple-loop, delay-line reservoir computing architecture, an algorithm for specific emitter identification, and a scalable prototype hardware design in combination with innovative photonic hardware. "Perspecta Labs will draw on its rich portfolio of research and development in AI, photonics, radio frequency (RF) analytics, and systems engineering to deliver this work," said Petros Mouchtaris, Ph.D., president of Perspecta Labs.
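For readers unfamiliar with the approach, the sketch below shows the general idea behind a delay-line reservoir computer in plain NumPy: a single nonlinear node driven through a masked delay loop, with only a linear readout being trained. This is a deliberately simplified, single-loop software illustration; the parameter values and the toy prediction task are assumptions made for the example and do not reflect Perspecta Labs' multi-loop photonic PEACH design.

```python
# Minimal single-loop delay-line reservoir sketch (NumPy only).
# An illustrative simplification, NOT the PEACH architecture: one delay loop,
# virtual nodes created by a random input mask, and a ridge-regression readout.
import numpy as np

rng = np.random.default_rng(0)

N_VIRTUAL = 50            # virtual nodes along the delay line (assumed value)
ALPHA, BETA = 0.7, 0.5    # feedback and input scaling (assumed values)

def run_reservoir(u):
    """Drive the delay-line reservoir with a 1-D input sequence u."""
    mask = rng.choice([-0.1, 0.1], size=N_VIRTUAL)   # fixed random input mask
    states = np.zeros((len(u), N_VIRTUAL))
    prev = np.zeros(N_VIRTUAL)                       # state one delay (tau) ago
    for k, u_k in enumerate(u):
        prev = np.tanh(ALPHA * prev + BETA * mask * u_k)
        states[k] = prev
    return states

def train_readout(states, targets, ridge=1e-6):
    """Linear readout fitted by ridge regression (the only trained part)."""
    X = np.hstack([states, np.ones((len(states), 1))])  # add bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)

# Toy task: predict the next sample of a noisy sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t) + 0.05 * rng.standard_normal(len(t))
S = run_reservoir(u[:-1])
W = train_readout(S, u[1:])
pred = np.hstack([S, np.ones((len(S), 1))]) @ W
print("train MSE:", np.mean((pred - u[1:]) ** 2))
```

The appeal of the approach for photonic hardware is visible even in this toy version: the reservoir itself is fixed, so only the lightweight linear readout ever needs training.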


Face Recognition Lets Palestinians Cross Israeli Checkposts Fast, But Raises Concerns

NPR Technology

A Palestinian man uses a biometric gate as he crosses into Israel at the Qalandia crossing in Jerusalem in July. Israel's military has invested tens of millions of dollars to upgrade West Bank crossings and ease entry for Palestinian workers. But critics slam the military's use of facial recognition technology as problematic.


Amazon and Microsoft are putting world at risk with killer AI, study says

The Japan Times

WASHINGTON – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons. Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future. The report's authors ask: "Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years, with critics warning that such weapons would jeopardize international security and herald a third revolution in warfare after gunpowder and the atomic bomb. A panel of government experts debated policy options regarding lethal autonomous weapons at a meeting of the United Nations Convention on Certain Conventional Weapons in Geneva on Wednesday.


Will we ever control the world with our minds?

#artificialintelligence

Science fiction can sometimes be a good guide to the future. In the film Upgrade (2018), the main character, Grey Trace, is shot in the neck. His wife is shot dead. Trace wakes up to discover that not only has he lost his wife, but he now faces a future as a wheelchair-bound quadriplegic. He is implanted with a computer chip called Stem, designed by famous tech innovator Eron Keen (any similarity to Elon Musk must be coincidental), which will let him walk again.


Know How Smarter Artificial Intelligence Is Battling against Insurance Fraud

#artificialintelligence

Artificial intelligence solutions are now essential weapons in insurers' battle against fraud. FREMONT, CA: The insurance industry is responsible for a mass of sensitive data concerning both its customers and employees. Any data breach at an insurance firm could compromise the personal information of many users in no time. But insurers can now attain a better cybersecurity posture by utilizing the groundbreaking technologies available to them. Artificial intelligence (AI), chief among them, is reforming insurance systems, making them more secure and enhancing the interaction between humans and machines.


A reality check on the role of machine learning in cybersecurity

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Cybersecurity, a huge industry worth over $100 billion, is awash with buzzwords. Cybersecurity companies often claim (or merely pretend) to use new state-of-the-art technologies to attract customers and sell their solutions. Naturally, with artificial intelligence in one of its craziest hype cycles, we're seeing plenty of solutions that claim to use machine learning, deep learning and other AI-related technologies to automatically secure the networks and digital assets of their clients. But contrary to what many companies profess, machine learning is not a silver bullet that will automatically protect individuals and organizations against security threats, says Ilia Kolochenko, CEO of ImmuniWeb, a company that uses AI to test the security of web and mobile applications.
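As a concrete reference point for what "machine learning" frequently means in this setting, the sketch below runs an unsupervised outlier detector over a few synthetic connection features. The features, parameter values and data are invented for illustration and are not drawn from any vendor's product; the point is that such a detector only surfaces candidate anomalies, which a human analyst still has to triage.

```python
# Illustrative sketch of ML-based network anomaly detection: an unsupervised
# outlier detector over simple connection features. All features and numbers
# here are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: [bytes_sent, bytes_received, duration_seconds]
normal = rng.normal(loc=[5_000, 20_000, 30],
                    scale=[1_000, 5_000, 10], size=(1_000, 3))

# A few synthetic exfiltration-like sessions: huge uploads, long durations.
suspicious = rng.normal(loc=[500_000, 2_000, 600],
                        scale=[50_000, 500, 60], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

labels = model.predict(np.vstack([normal[:5], suspicious]))  # +1 normal, -1 outlier
print(labels)  # flagged sessions are only candidates; an analyst must triage them
```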


Explainable AI: Why visualizing neural networks is important

#artificialintelligence

Last week, researchers from OpenAI and Google introduced Activation Atlases, a tool that helps make sense of the inner workings of neural networks by visualizing how they see and classify different objects. At first glance, Activation Atlases is an amusing tool that lets you see the world through the eyes of AI models. But it is also one of many important efforts to explain the decisions made by neural networks, one of the greatest challenges of the AI industry and an important hurdle to trusting AI with critical tasks. Artificial intelligence, or more precisely its popular subset deep learning, is far from the only kind of software we use. We've been using software in different fields for decades.
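A much simpler cousin of what Activation Atlases does is sketched below: it captures the activations of one intermediate layer of a pretrained CNN and reports the most active channels for a single image. The model (resnet18), the layer choice and the input file name are arbitrary assumptions for the example; the real Activation Atlases pipeline additionally applies feature inversion and dimensionality reduction over millions of activation vectors, which this does not attempt.

```python
# Inspect which channels of an intermediate CNN layer respond most strongly
# to one image. Illustrates the general idea of looking at hidden activations,
# not the actual OpenAI/Google atlas pipeline.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

captured = {}
def hook(_module, _inputs, output):
    captured["acts"] = output.detach()

# Hook an intermediate block (layer3 here, chosen arbitrarily).
model.layer3.register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")   # hypothetical input image
with torch.no_grad():
    model(preprocess(img).unsqueeze(0))

acts = captured["acts"][0]                       # shape: (channels, H, W)
strength = acts.mean(dim=(1, 2))                 # mean activation per channel
top = torch.topk(strength, k=5)
print("most active channels:", top.indices.tolist())
```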


What's the best way for the Pentagon to invest in artificial intelligence?

#artificialintelligence

A deep dive into the numbers shows an early emphasis on basic research. The Defense Advanced Research Projects Agency's budget request includes $138 million for advanced land systems technology, up from $109 million in fiscal 2019. That program includes research into urban reconnaissance and AI-driven subterranean operations. DARPA's budget also includes $10 million for the Highly Networked Dissemination of Relevant Data Project, a situational awareness tool, as well as $161 million for the AI Human Machine Symbiosis Project, up from $97 million.


Role of Artificial Intelligence in Data Security

#artificialintelligence

Just as vitamin deficiency and poor hygiene can make the human body seriously ill, deficient access points and a lack of hygiene in cyber environments can lead to cyberattacks in hyper-connected workplaces. The likelihood of cyberattacks is increasing as the data feeding into networks continues to balloon. As a result, organizations live in fear of becoming victims of cyberattacks and are willing to spend heavily on cybersecurity tools and services. According to IDC research, organizations will spend $101.6 billion on cybersecurity software, services, and hardware by 2020. Leading organizations are integrating tens of security products into their environments, yet they still fear being exposed and vulnerable.


US Army Developing AI-Guided Long-Range Smart Artillery Shell

#artificialintelligence

The Cannon-Delivered Area Effects Munition (C-DAEM) is a new 155-millimeter artillery round in development for the Army's M777 howitzer, M109A6 Paladin self-propelled howitzer and new XM1299 self-propelled howitzer. The high-tech shell will be able to guide itself toward its intended target, even in areas where GPS is jammed by enemy forces. The munition, which has a 43-mile range, will take more than a minute to reach its target, and can slow down and guide itself on the way. This makes it easier for the Army to hit targets that move around, like vehicles and infantry, although it can't yet hit a moving target. Popular Mechanics notes that C-DAEM will replace the dual-purpose improved conventional munition (DPICM), a type of cluster munition that made up for a lack of precision by scattering bomblets above the battlefield, ensuring it would at least do some damage to its target even if it didn't hit it directly.