It's both AI technology and ethics that will enable JADC2 - Breaking Defense


Questions that loom large for the wider application of artificial intelligence (AI) in Defense Department operations often center on trust: How does an operator know when an AI system is wrong, when it has made a mistake, or when it has not behaved as intended? Answers to questions like these come from a technical discipline known as Responsible AI (RAI). It is the subject of a report issued by the Defense Innovation Unit (DIU) in mid-November called Responsible AI Guidelines in Practice, which addresses a requirement in the FY21 National Defense Authorization Act (NDAA) to ensure that the DoD has "the ability, requisite resourcing, and sufficient expertise to ensure that any artificial intelligence technology…is ethically and responsibly developed."

DIU's RAI guidelines provide a framework for AI companies, DoD stakeholders, and program managers that can help ensure AI programs are built with the principles of fairness, accountability, and transparency at each step in the development cycle of an AI system, according to Jared Dunnmon, technical director of the artificial intelligence/machine learning portfolio at DIU.
