La veille de la cybersécurité


Experts at the National Institute of Standards and Technology want public and private entities to take a socio-technical approach to implementing artificial intelligence technologies to help mitigate algorithmic biases and other risks to AI systems, as detailed in a new playbook. The recommendations, which aim to help organizations navigate the pervasive biases that often accompany AI technologies, are slated to come out by the end of the week, Nextgov has learned.

The playbook is meant to act as a companion guide to NIST's AI Risk Management Framework, the final version of which will be submitted to Congress in early 2023. Reva Schwartz, a research scientist and principal AI investigator at NIST, said the guidelines act as a comprehensive guide that public and private organizations can tailor to their internal structure, rather than as a rigid checklist. "It's meant to help people navigate the framework and implement practices internally that could be used," Schwartz told Nextgov.
