Military and defense organizations that adopt transformative technologies such as artificial intelligence (AI) and machine learning can realize tremendous gains and maintain advantages over increasingly capable adversaries and competitors. AI can allow autonomous vehicles to operate in terrain deemed too dangerous for humans, provide predictive analytics and maintenance to keep large fleets running smoothly and safely, and enable autonomous operations in difficult conditions. As the US Department of Defense (DoD) increasingly adopts AI in use cases ranging from back-office functions to battlefield operations, there is a realization that despite the benefits AI can bring, there is also a risk of unintended consequences that could cause significant harm. As a result, the DoD takes the topics of ethics, transparency, and policy very seriously. A few years ago, the DoD created the Joint Artificial Intelligence Center (JAIC) to help determine how best to move forward with this transformative technology.
To practice trustworthy or responsible AI (AI that is truly fair, explainable, accountable, and robust), a number of organizations are creating in-house centers of excellence. These are groups of trustworthy AI stewards from across the business who can understand, anticipate, and mitigate any potential problems. The intent is not necessarily to create subject matter experts but rather a pool of ambassadors who act as point people. Here, I'll walk you through a set of best practices for establishing an effective center of excellence in your own organization. Any large company should have such a function in place.
It has been six months since the Department of Defense adopted ethical principles for artificial intelligence. Since then, the department's Joint AI Center has faced the daunting challenge of taking that conceptual work and scaling it to develop actionable guidance for the rest of the military. The goal is to give anyone who works in technology development -- from contracting officers to software developers -- a "shared vocabulary" for building ethics into any DoD work involving AI. What's at stake, leaders say, is ensuring that the DoD uses the emerging technology in ways that uphold the department's values while managing potentially huge shifts in the "character" of warfare. The first step is to agree on a document that turns the principles into clear guidance.
Keeping up with artificial intelligence (AI) and data privacy can be overwhelming. While there's loads of promise and opportunity, there are also concerns about data misuse and personal privacy being at risk. As we evaluate these topics and as the Fourth Industrial Revolution unfolds, questions arise about the promise and peril of AI and how organizations can put steps in place to better realize its value. Integrating "ethics" into technology products can feel abstract for engineers and developers. While many technology companies are independently working on initiatives to do this in concrete and tangible ways, it is imperative that we break out of those silos and share best practices.
As the U.S. Department of Defense (DoD) seeks to increase funding for artificial intelligence (AI) technologies for defense and national security purposes, a new policy memorandum directs the DoD to take steps to ensure that AI is designed, developed, and deployed in a responsible manner. In a May 26, 2021, memorandum titled "Implementing Responsible Artificial Intelligence in the Department of Defense," Deputy Secretary of Defense Kathleen Hicks calls for the incorporation of responsible AI principles into the DoD's AI requirements and acquisition processes. Ms. Hicks wrote: "As the DoD embraces [AI], it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties." The memorandum outlines six "foundational tenets" for implementing "Responsible AI" across the department. It also reaffirms the DoD's AI Ethical Principles and confirms that they apply to all DoD AI capabilities of any scale, including AI-enabled autonomous systems.