NIST seeks input on guidance to pin down trustworthy AI

The National Institute of Standards and Technology is seeking public input on what to include in forthcoming guidance that will set the rules of the road for fielding trustworthy artificial intelligence in and out of government. Following the recommendations of the National Security Commission on AI, NIST is working on an AI Risk Management Framework that will set voluntary standards for agencies and industries to consider when adopting AI solutions. In a request for information posted Wednesday, NIST said the upcoming framework will define trustworthy AI in terms of transparency, fairness and accountability. The agency plans to release the framework as a "living document" that adapts to changes in technology and practices. "Defining trustworthiness in meaningful, actionable, and testable ways remains a work in progress," the agency wrote in its RFI.