A writer and military historian responds to Justina Ireland's "Collateral Damage." The histories of the military and technology often go hand in hand. Soldiers and military thinkers throughout the past have continually come up with new ways to fill the people over there full of holes as a means of encouraging them to stop trying to do the same in return. After the introduction of a new weapon or the improvement of an existing one, strategists spend their time working out how best to deploy their forces to take advantage of the new tools, or how to blunt their effectiveness with countermeasures. The development of the Greek phalanx helped protect soldiers from cavalry, the deployment of English longbows helped stymie large formations of enemy soldiers, new construction methods changed the shape of fortifications, line infantry helped European formations take advantage of firearms, and anti-aircraft cannons helped protect against incoming enemy aircraft.
I'll be moderating today's press briefing. Today it's my pleasure to introduce the director of the Department of Defense [Joint] Artificial Intelligence Center (JAIC), Lieutenant General Michael Groen. Lieutenant General Groen is joined today by Dr. Jane Pinelis, who is the Chief of Test and Evaluation for the JAIC, and Ms. Alka Patel, who is the Chief of Responsible AI (Artificial Intelligence). We'll begin today's press briefing with an opening statement followed by questions. We've got people out on the line, and I think we'll be able to get to everybody today.

LIEUTENANT GENERAL MICHAEL S. GROEN: Thank you, Arlo. And greetings to the members of the Defense Press Corps; really glad to be here with you today. I hope many of you got the opportunity to listen in to at least some of the AI symposium and technology exchange that we had this week. It was our second annual symposium, with over 1,400 participants and three days of virtualized content. I want to say thank you, ...
President Joe Biden's decision to elevate the director of the Office of Science and Technology Policy to a Cabinet-level position underscores the importance of artificial intelligence in America's future. His selection of Alondra Nelson to be deputy director of OSTP shows that unlocking AI's potential will be done with a focus on racial and gender equity. Nelson, a Black woman whose research focuses on the intersection of science, technology and social inequality, has said that technologies like AI "reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue." There's no doubt that ethics must be foundational to the design, development and acquisition of AI capabilities, and that government agencies should embed trustworthy AI as part of a holistic strategy to transform the way government operates. Agency leaders can start by identifying areas where AI can transform their internal operations and improve their public-facing mission services with minimal risk of bias.
Z Advanced Computing, Inc. (ZAC), the pioneer Cognitive Explainable-AI (Artificial Intelligence) (Cognitive XAI) software startup, has announced AI and Machine Learning (ML) breakthroughs: ZAC has achieved 3D image recognition using only a few training samples, running on only an average laptop with a low-power CPU for both training and recognition, for the US Air Force (USAF). This is in sharp contrast to other algorithms in the industry, which require thousands to billions of samples and are trained on large GPU servers. "ZAC requires much less computing power and much less electrical power to run, which is great for mobile and edge computing, as well as environment, with less Carbon footprint," emphasized Dr. Saied Tadayon, CTO of ZAC. ZAC is the first to demonstrate its Cognition-based Explainable-AI (XAI) algorithms, in which various attributes and details of 3D (three-dimensional) objects are recognized from any view or angle. "You cannot do this task with the other algorithms, such as Deep Convolutional Neural Networks (CNN) or ResNets, even with an extremely large number of training samples, on GPU servers. That's basically hitting the limitations of CNNs or Neural Nets, which all other companies are using now," said Dr. Bijan Tadayon, CEO of ZAC.
First of all, I want to state for the record that I have never played a video game that involved violence or war. I think the last time I played a "video game" was Flight Simulator. As a result, I suspect some readers are much more familiar with intensive and fanciful warfare than I am. Still, recently, I've been part of discussions with the Department of Defense and organizations that advise, consult and criticize the DoD on the topic of AI in warfare. Introducing AI ethics amid the violence and killing of war is a complicated matter.
An AI-controlled fighter jet will battle a US Air Force pilot in a simulated dogfight next week -- and you can watch the action online. The clash is the culmination of DARPA's AlphaDogfight competition, which the Pentagon's "mad science" wing launched to increase trust in AI-assisted combat. DARPA hopes this will raise support for using algorithms in simpler aerial operations, so pilots can focus on more challenging tasks, such as organizing teams of unmanned aircraft across the battlespace. The three-day event was scheduled to take place in-person in Las Vegas from August 18-20, but the COVID-19 pandemic led DARPA to move the event online.
This long-running series should provide you with a very basic understanding of what AI is, what it can do, and how it works. In addition to the article you're currently reading, the guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, the difference between human and machine intelligence, and ethics. In this edition of the guide, we'll take a glance at global AI policy. In the coming years it will be important for everyone to understand what differences in those policies can mean for our safety and privacy. Artificial intelligence has traditionally been swept in with other technologies when it comes to policy and regulation.
The idea of responsible artificial intelligence (AI) is spreading far and wide across the U.S. Department of Defense and its surrounding ecosystem. There's been the new data strategy, the responsible AI memo and the newly approved JADC2 strategy that has a massive data component. "The DoD is very much accelerating its path," said Thomas Kenney, chief data officer and director of SOF AI for U.S. Special Operations Command, during day two of the virtual AFCEA/GMU Critical Issues in C4I Symposium. "Our chief data officer at the DoD, David Spirk, is doing herculean work to help the entire DoD move forward," he added. "That new data strategy, as we think about data sharing, is absolutely essential because it creates the conditions for success where we can open doors to data we maybe didn't have access to before or maybe data we didn't even know existed," Kenney said.