JUNE 2, 2021 – From the battlefield to the back office, artificial intelligence has the potential to transform how the Defense Department does business: increasing the speed of decision making, making sense of complex data sets and improving the efficiency of back-office operations. Ensuring that AI is developed, procured and used responsibly and ethically is a top priority for the department's leadership. "As the Department of Defense embraces artificial intelligence, it is imperative that we adopt responsible behavior, processes and outcomes in a manner that reflects the department's commitment to its core set of ethical principles," Deputy Secretary of Defense Dr. Kathleen Hicks wrote in a department-wide memorandum released last week. As part of that commitment to responsible artificial intelligence, or RAI, the memorandum sets forth foundational tenets for implementation across the department, including a governance structure and processes to provide oversight and accountability; warfighter trust to ensure fidelity in AI capabilities and their use; a systems engineering and risk management approach to the AI product and acquisition lifecycle; and a robust ecosystem to foster collaboration across government, academia, industry and allies and to build an AI-ready workforce. The memorandum also spelled out how the Joint Artificial Intelligence Center will serve as the lead in coordinating the implementation and oversight of the department's RAI efforts.
As the U.S. Department of Defense (DoD) seeks to increase funding for artificial intelligence (AI) technologies for defense and national security purposes, a new policy memorandum directs the DoD to take steps to ensure that AI is designed, developed, and deployed in a responsible manner. In a May 26, 2021, memorandum titled "Implementing Responsible Artificial Intelligence in the Department of Defense," Deputy Secretary of Defense Kathleen Hicks calls for the incorporation of responsible AI principles into the DoD's AI requirements and acquisition processes. Ms. Hicks wrote: "As the DoD embraces [AI], it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties." The memorandum outlines six "foundational tenets" for implementing "Responsible AI" across the department. It also reaffirms the DoD's AI Ethical Principles and confirms that they apply to all DoD AI capabilities of any scale, including AI-enabled autonomous systems.
NEW YORK - The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield. The new principles call for people to "exercise appropriate levels of judgment and care" when deploying and using AI systems, such as those that scan aerial imagery to look for targets. Defense Department officials outlined the new approach Monday. "The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," said Defense Secretary Mark Esper. It follows recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.
The Joint Artificial Intelligence Center will lead implementation of responsible AI across the Defense Department, according to a new directive. In a department-wide memo signed last week, Deputy Defense Secretary Kathleen Hicks enumerated foundational tenets for responsible AI, reaffirmed the ethical AI principles the department adopted last year, and mandated that the JAIC director start work on four activities for developing a responsible AI ecosystem. "As the DoD embraces artificial intelligence (AI), it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties," Hicks said in the memo, which was announced June 1. "A trusted ecosystem not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public." Hicks assigned the JAIC director to coordinate responsible AI through a working council, which must in turn hammer out a strategy and implementation pathway, create a talent management framework, and report on how responsible AI can be integrated into acquisitions.
Military and defense organizations using transformative technologies such as artificial intelligence and machine learning can realize tremendous gains and maintain advantages over increasingly capable adversaries and competitors. These technologies can allow autonomous vehicles to enter terrain deemed too dangerous for humans, provide predictive analytics and maintenance to keep large fleets running smoothly and safely, and enable autonomous operations in difficult conditions. As the US Department of Defense (DoD) increasingly adopts AI technology in a wide variety of use cases, ranging from back-office functions to battlefield operations, there is a realization that, despite the benefits AI can bring, there is also a risk of unintended consequences that could cause significant harm. As a result, the DoD takes the topics of ethics, transparency, and ethics policy very seriously. A few years ago, the DoD created the Joint Artificial Intelligence Center, also referred to as the JAIC, to help determine how best to move forward with this transformative technology.