The US Army wants to develop small drones that automatically spot, identify and target vehicles and people. The technology may allow faster responses to threats, but it could also be a step toward autonomous drones that attack targets without human oversight. The project will use machine-learning algorithms, such as neural networks, to equip drones as small as consumer quadcopters with artificial intelligence.
It's been a couple of years since AI-controlled bots fragged each other in an epic Doom deathmatch. Now, EA's Search for Extraordinary Experiences Division, or SEED, has taught self-learning AI agents to play Battlefield 1. Each character in the basic match uses a neural-network model trained to play the game through trial and error. The AI-controlled troops first learned by watching human players, then improved through parallel training against other bots. The AI soldiers even learned to pick up ammo or health when they're running low, much like you or I would.
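SEED has not published its training code, but the trial-and-error loop described above is the core idea of reinforcement learning. As a minimal, purely illustrative sketch (all names, states and parameters below are invented, not taken from SEED's work), here is tabular Q-learning on a toy one-dimensional "corridor" task, where an agent learns by repeated attempts that stepping right leads to a reward:

```python
import random

# Toy trial-and-error (Q-learning) sketch: an agent in a 1-D corridor of
# 5 positions learns that walking right reaches a reward at the far end.
N_STATES = 5          # positions 0..4; reward sits at state 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update from the observed transition (s, a, r, s2)
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy learned purely from trial and error: step right everywhere
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Game agents like SEED's replace the lookup table with a neural network and the corridor with the game state, but the learn-from-reward loop is the same shape.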
DARPA, the research arm of the US Department of Defense, wants to use machine learning to develop new technologies for use in combat. AIs can now be used not just to recognise objects that already exist but also to devise new ones. Machine learning is already used in some engineering and design contexts, and DARPA wants to expand that usage. AIs programmed to understand fundamental physics will be set engineering challenges, giving them free rein to supply out-of-the-box solutions that will help with innovative design. It is hoped that AI will greatly assist the US government in developing new machines and components for military purposes.
Security professionals in the enterprise are facing an uphill battle to maintain control of corporate networks. Data breaches and cyberattacks are rampant, sensitive information belonging to both companies and individuals is spilling unchecked into the underbelly of the Internet, and with the emergence of state-sponsored threat actors, it is becoming more and more difficult for organizations to keep up. It is estimated that cyberattacks and online threats will cost businesses up to $6 trillion annually by 2021, up from $3 trillion in 2015. Once cyberattackers compromise an enterprise network or cloud service, information can be stolen, surveillance may be conducted, or in some cases, ransomware attacks can lock down an entire operation and hold a business to ransom. However, new technologies are entering the cybersecurity space which may help reduce the financial cost and the burden on cybersecurity professionals pressed for time and often operating with limited staff and budgets.
A public report by Harvard reveals how unprepared the US military is when it comes to the Artificial Intelligence (AI) technology known as Deep Learning. The study by Harvard's Kennedy School was published in July 2017, written by Greg Allen and Taniel Chan, and was conducted with funding from IARPA. The research is titled "Artificial Intelligence and National Security". I've written about the many tribes of AI and about the term AI being too ambiguous, meaning too many things to too many people. Where do we find Deep Learning in this report from Harvard?
Whether the U.S. will succeed in becoming a leader in new military technology will be determined by how fast the U.S. Department of Defense recognizes the potential of AI and advanced autonomous systems, invests in associated technologies such as advanced computing, artificial neural networks, computer vision, natural language processing, big data, machine learning, and unmanned systems and robotics, and finds use cases for them on the battlefield, the report shows. According to data from the International Data Corporation (IDC), a global market intelligence firm based in Massachusetts, the United States is still the largest market for cognitive/AI spending, reaching almost $10 billion in revenue in 2017. China, on the other hand, is constantly looking to invest in and benefit from American innovation in the military, putting money into U.S. startups when Washington seems reluctant.
AI can already dream up imaginary celebrities, so perhaps it can help the Army imagine revolutionary new engine parts or aircraft, too. DARPA wants entrants to rethink the way complex components are designed by combining recent advances in machine learning with fundamental tenets of math and engineering. AI is increasingly being used to imagine new things, from celebrity faces to clothing (see "The GANfather: The man who's given machines the gift of imagination"). The systems being used to conjure up new ideas are still in their early stages, but they show a path forward. Machine learning is also already used in some areas of design and engineering, but the DARPA project aims to apply it more broadly, and to the crucial task of determining function and form.
In today's always connected world, losing power is more than just an annoyance. "The truth is, we rely on electricity much more than we realize," writes Sherry Hewins in her column What Could Happen in a Long-Term Power Outage? "Even if you live 'off the grid' as I did for years, you are still living in a world and a society that is deeply dependent upon electricity." It is this "deep dependency" that has power companies moving toward what is called the Smart Grid, a more efficient and reliable power-distribution infrastructure. One reason these capabilities are possible is the use of two-way communications between power-distribution centers and smart equipment (smart meters and smart appliances) downstream. Enhanced communications help more than just the people who make sure electricity keeps flowing.
Google is not usually shy about touting its accomplishments in the artificial intelligence space, but one win the company does not seem particularly keen on broadcasting is a recent pilot project with the U.S. Department of Defense. In a widely quoted report this week, Gizmodo said it had learned about Google quietly partnering with the DoD on a project to help the Pentagon develop technology for analyzing footage gathered by aerial drones. Google is working with a Defense Department group called Project Maven that was established last year to accelerate the military's adoption of artificial intelligence and machine learning capabilities for analyzing big data sets. One of the primary missions for the group--as described in this memo--is to find technology to speed up the evaluation process for the massive number of photos and videos that U.S. military drones are gathering daily in support of the Defeat-ISIS campaign. The Algorithmic Warfare Cross-Functional Team (AWCFT)--as Project Maven is also known--has been tasked with providing the military with computer vision algorithms for better detecting and classifying objects in drone footage.
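Project Maven's actual models are not public, so as a stand-in, here is a toy nearest-centroid classifier that illustrates the basic shape of the task the paragraph describes: mapping a feature vector extracted from an image patch to an object label. The feature values and labels below are invented for illustration; real pipelines use deep neural networks over raw pixels rather than hand-made features.

```python
import math

# Invented training data: tiny 3-number "feature vectors" per labeled example.
TRAINING = {
    "vehicle":  [(0.9, 0.1, 0.8), (0.8, 0.2, 0.9)],
    "building": [(0.1, 0.9, 0.7), (0.2, 0.8, 0.6)],
}

def centroid(vectors):
    # Average each coordinate across a label's examples
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    # Pick the label whose centroid is nearest in Euclidean distance
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((0.85, 0.15, 0.85)))   # close to the "vehicle" examples
```

The point of automating this step is scale: a classifier can triage footage volumes that would take human analysts far longer to review frame by frame.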
There is little doubt that the Defense Department needs help from Silicon Valley's biggest companies as it pursues work on artificial intelligence. The question is whether the people who work at those companies are willing to cooperate. Robert Work, a former deputy secretary of defense, announced last week that he is teaming up with the Center for a New American Security, an influential Washington think tank that specializes in national security, to create a task force of former government officials, academics and representatives from private industry. Their goal is to explore how the federal government should embrace AI technology and work better with big tech companies and other organizations. There is a growing sense of urgency to the question of what the United States is doing in artificial intelligence.