I'll be moderating today's press briefing. Today it's my pleasure to introduce the director of the Department of Defense [Joint] Artificial Intelligence Center (JAIC), Lieutenant General Michael Groen. Lieutenant General Groen is joined today by Dr. Jane Pinelis, who is the Chief of Test and Evaluation for the JAIC, and Ms. Alka Patel, who is the Chief of Responsible AI (Artificial Intelligence). We'll begin today's press briefing with an opening statement followed by questions. We've got people out on the line. And I think we'll be able to get to everybody today. LIEUTENANT GENERAL MICHAEL S. GROEN: Thank you, Arlo. And greetings to the members of the Defense Press Corps, really glad to be here with you today. I hope many of you got the opportunity to listen in to at least some of the AI symposium and technology exchange that we had this week. This was our second annual symposium. We had more than 1,400 participants in three days of virtualized content. I want to say thank you, ...
President Joe Biden's decision to elevate the director of the Office of Science and Technology Policy to a Cabinet-level position underscores the importance of artificial intelligence in America's future. His selection of Alondra Nelson to be deputy director of OSTP shows that unlocking AI's potential will be done with a focus on racial and gender equity. Nelson, a Black woman whose research focuses on the intersection of science, technology and social inequality, has said that technologies like AI "reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue." There's no doubt that ethics must be foundational to the design, development and acquisition of AI capabilities, and that government agencies should embed trustworthy AI as part of a holistic strategy to transform the way government operates. Agency leaders can start by identifying areas where AI can transform their internal operations and improve their public-facing mission services with minimal risk of bias.
Z Advanced Computing, Inc. (ZAC), the pioneer Cognitive Explainable-AI (Artificial Intelligence) (Cognitive XAI) software startup, has made AI and Machine Learning (ML) breakthroughs: ZAC has achieved 3D Image Recognition using only a few training samples, and using only an average laptop with low power CPU, for both training and recognition, for the US Air Force (USAF). This is in sharp contrast to the other algorithms in industry that require thousands to billions of samples, being trained on large GPU servers. "ZAC requires much less computing power and much less electrical power to run, which is great for mobile and edge computing, as well as environment, with less Carbon footprint," emphasized Dr. Saied Tadayon, CTO of ZAC. ZAC is the first to demonstrate the novel and superior algorithms Cognition-based Explainable-AI (XAI), where various attributes and details of 3D (three dimensional) objects are recognized from any view or angle. "You cannot do this task with the other algorithms, such as Deep Convolutional Neural Networks (CNN) or ResNets, even with an extremely large number of training samples, on GPU servers. That's basically hitting the limitations of CNNs or Neural Nets, which all other companies are using now," said Dr. Bijan Tadayon, CEO of ZAC.
First of all, I want to state for the record that I have never played a video game that involved violence or war. I think the last time I played a "video game" was Flight Simulator. As a result, I suspect some readers are much more familiar with intensive and fanciful warfare than I am. Still, recently, I've been part of discussions with the Department of Defense and organizations that advise, consult, and criticize the DoD on the topic of AI in warfare. Introducing AI ethics into the violence and killing of war is a complicated issue.
An AI-controlled fighter jet will battle a US Air Force pilot in a simulated dogfight next week -- and you can watch the action online. The clash is the culmination of DARPA's AlphaDogfight competition, which the Pentagon's "mad science" wing launched to increase trust in AI-assisted combat. DARPA hopes this will raise support for using algorithms in simpler aerial operations, so pilots can focus on more challenging tasks, such as organizing teams of unmanned aircraft across the battlespace. The three-day event was scheduled to take place in-person in Las Vegas from August 18-20, but the COVID-19 pandemic led DARPA to move the event online.
This long-running series should provide you with a very basic understanding of what AI is, what it can do, and how it works. In addition to the article you're currently reading, the guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, the difference between human and machine intelligence, and ethics. In this edition of the guide, we'll take a glance at global AI policy. In the coming years it will be important for everyone to understand what those differences can mean for our safety and privacy. Artificial intelligence has traditionally been swept in with other technologies when it comes to policy and regulation.
The idea of responsible artificial intelligence (AI) is spreading far and wide across the U.S. Department of Defense and its surrounding ecosystem. There's been the new data strategy, the responsible AI memo and the newly approved JADC2 strategy that has a massive data component. "The DoD is very much accelerating its path," said Thomas Kenney, chief data officer and director of SOF AI for U.S. Special Operations Command, during day two of the virtual AFCEA/GMU Critical Issues in C4I Symposium. "Our chief data officer at the DoD, David Spirk, is doing herculean work to help the entire DoD move forward," he added. "That new data strategy, as we think about data sharing, is absolutely essential because it creates the conditions for success where we can open doors to data we maybe didn't have access to before or maybe data we didn't even know existed," Kenney said.
As the Pentagon rapidly builds and adopts artificial intelligence tools, Deputy Defense Secretary Kathleen Hicks said military leaders increasingly are worried about a second-hand problem: AI safety. AI safety broadly refers to making sure that artificial intelligence programs don't wind up causing problems, no matter whether they were based on corrupted or incomplete data, were poorly designed, or were hacked by attackers. AI safety is often seen as an afterthought as companies rush to build, sell, and adopt machine learning tools. But the Department of Defense is obligated to put a little more attention into the issue, Hicks said Monday at the Defense One Tech Summit. "As you look at testing evaluation and validation and verification approaches, these are areas where we know--whether you're in the commercial sector, the government sector, and certainly if you look abroad, there is not a lot happening in terms of safety," she said.
The Association for the Advancement of Artificial Intelligence's 2021 Spring Symposium Series was held virtually from March 22-24, 2021. There were ten symposia in the program: Applied AI in Healthcare: Safety, Community, and the Environment, Artificial Intelligence for K-12 Education, Artificial Intelligence for Synthetic Biology, Challenges and Opportunities for Multi-Agent Reinforcement Learning, Combining Machine Learning and Knowledge Engineering, Combining Machine Learning with Physical Sciences, Implementing AI Ethics, Leveraging Systems Engineering to Realize Synergistic AI/Machine-Learning Capabilities, Machine Learning for Mobile Robot Navigation in the Wild, and Survival Prediction: Algorithms, Challenges and Applications. This report contains summaries of all the symposia. The two-day international virtual symposium included invited speakers, presenters of research papers, and breakout discussions from attendees around the world. Registrants joined from locations including the US, Canada, Melbourne, Paris, Berlin, Lisbon, Beijing, Central America, Amsterdam, and Switzerland. We had active discussions about solving health-related, real-world issues in various emerging, ongoing, and underrepresented areas using innovative technologies including Artificial Intelligence and Robotics. We primarily focused on AI-assisted and robot-assisted healthcare, with specific focus on improving safety, the community, and the environment through the latest technological advances in our respective fields. The day was kicked off by Raj Puri, Physician and Director of Strategic Health Initiatives & Innovation at Stanford University, who spoke about a novel, automated sentinel surveillance system his team built to mitigate COVID and its integration into their public-facing dashboard of clinical data and metrics.
Selected paper presentations during both days were wide ranging, including talks from Oliver Bendel, a professor from Switzerland, and his Swiss colleague Alina Gasser, who discussed co-robots in care and support, providing the latest information on technologies relating to human-robot interaction and communication. Yizheng Zhao, Associate Professor at Nanjing University, and her colleagues from China discussed views of ontologies with applications to logical difference computation in the healthcare sector. Pooria Ghadiri from McGill University, Montreal, Canada, discussed his research relating to AI enhancements in health-care delivery for adolescents with mental health problems in the primary care setting.