From the battlefield to the back office, artificial intelligence has the potential to transform how the Defense Department does business, from increasing the speed of decision-making to making sense of complex data sets and improving efficiency in back-office operations. Ensuring that AI is developed, procured and used responsibly and ethically is a top priority for department leadership. "As the Department of Defense embraces artificial intelligence, it is imperative that we adopt responsible behavior, processes and outcomes in a manner that reflects the department's commitment to its core set of ethical principles," Deputy Secretary of Defense Dr. Kathleen Hicks wrote in a department-wide memorandum released last week. As part of that commitment to responsible artificial intelligence, or RAI, the memorandum sets forth foundational tenets for implementation across the department, including a governance structure and processes to provide oversight and accountability; warfighter trust to ensure fidelity in the AI capability and its use; a systems engineering and risk management approach to implementation across the AI product and acquisition lifecycle; and a robust ecosystem to ensure collaboration across government, academia, industry and allies and to build an AI-ready workforce. The memorandum also spelled out how the Joint Artificial Intelligence Center will serve as the lead in coordinating the implementation and oversight of the department's RAI efforts.
As the U.S. Department of Defense (DoD) seeks to increase funding for artificial intelligence (AI) technologies for defense and national security purposes, a new policy memorandum directs the DoD to take steps to ensure that AI is designed, developed, and deployed in a responsible manner. In a May 26, 2021, memorandum titled "Implementing Responsible Artificial Intelligence in the Department of Defense," Deputy Secretary of Defense Kathleen Hicks calls for the incorporation of responsible AI principles into the DoD's AI requirements and acquisition processes. Ms. Hicks wrote: "As the DoD embraces [AI], it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties." The memorandum outlines six "foundational tenets" for implementing "Responsible AI" across the DoD. It also reaffirms the DoD's AI Ethical Principles and confirms that they apply to all DoD AI capabilities of any scale, including AI-enabled autonomous systems.
The Joint Artificial Intelligence Center will lead implementation of responsible AI across the Defense Department, according to a new directive. In a departmentwide memo signed last week, Deputy Defense Secretary Kathleen Hicks enumerated foundational tenets for responsible AI, reaffirmed the ethical AI principles the department adopted last year, and directed the JAIC director to begin work on four activities for developing a responsible AI ecosystem. "As the DoD embraces artificial intelligence (AI), it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties," Hicks said in the memo, which was announced June 1. "A trusted ecosystem not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public." Hicks assigned the JAIC director to coordinate responsible AI through a working council, which must in turn hammer out a strategy and implementation pathway, create a talent management framework, and report on how responsible AI can be integrated into acquisitions.
As the Pentagon rapidly builds and adopts artificial intelligence tools, Deputy Defense Secretary Kathleen Hicks said military leaders increasingly are worried about a related problem: AI safety. AI safety broadly refers to making sure that artificial intelligence programs don't wind up causing problems, whether because they were based on corrupted or incomplete data, were poorly designed, or were hacked by attackers. AI safety is often treated as an afterthought as companies rush to build, sell, and adopt machine learning tools. But the Department of Defense is obligated to pay more attention to the issue, Hicks said Monday at the Defense One Tech Summit. "As you look at testing, evaluation, and validation and verification approaches, these are areas where we know--whether you're in the commercial sector, the government sector, and certainly if you look abroad--there is not a lot happening in terms of safety," she said.
The Department of Defense adopted its Ethical Principles for Artificial Intelligence in February 2020, a first for any military organization. These principles build on the foundational work performed by the Defense Innovation Board and are tied directly to one of the pillars of the DoD AI Strategy: leading in military ethics and safety. The Joint Artificial Intelligence Center serves as the Department's lead for coordinating the oversight and implementation of these principles. Alka Patel, head of AI Ethics Policy for the JAIC, focuses on how to operationalize the five DoD AI Ethics Principles (Responsible, Equitable, Traceable, Reliable and Governable) and put them into practice in the design, development, deployment, and use of AI-enabled capabilities. To operationalize these principles throughout the DoD, the JAIC is turning to Responsible AI, an enterprise-wide framework intended to give the DoD workforce and the American public confidence that DoD AI-enabled systems will be safe and reliable, and will adhere to ethical standards.