Artificial intelligence - do you know the risks?

#artificialintelligence 

As part of developing an AI Risk Management Framework, the US National Institute of Standards and Technology (NIST) has published a draft report identifying three main categories of risk of which those designing, developing and using AI should be aware. The first concerns technical characteristics: the reliability, accuracy and robustness of the systems being used. Most relevant to AI developers and designers, evaluation criteria should be used to assess accuracy and identify sources of error. Particular care should be taken when applying AI to new data, and established standards of safety and security must be addressed. In general, an AI system's assessments, and any decision derived from them, should be open to scrutiny by a human.