Coming AI regulation may not protect us from dangerous AI

#artificialintelligence 

The Act falls short in several ways:

- It offers no criteria by which to define unacceptable risk for AI systems, and no method to add new high-risk applications to the Act if such applications are later discovered to pose a substantial danger of harm. This is particularly problematic because AI systems are becoming broader in their utility.
- It requires only that companies take into account harm to individuals, excluding indirect and aggregate harms to society. An AI system that has a very small effect on, e.g., each person's voting patterns might in the aggregate have a huge social impact (see the sketch after this list).
- It permits virtually no public oversight over the assessment of whether an AI system meets the Act's requirements.

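To make the aggregate-harm point concrete, here is a minimal back-of-the-envelope sketch in Python. The electorate size and per-person effect are purely hypothetical numbers chosen for illustration, not figures from the Act or the underlying article.

```python
# Hypothetical illustration: a per-person effect too small to register as
# individual harm can still be enormous in aggregate.

electorate = 150_000_000   # assumed electorate size (hypothetical)
per_person_shift = 0.001   # assumed 0.1% chance the system flips one person's vote

expected_shifted_votes = electorate * per_person_shift
print(f"Expected votes shifted: {expected_shifted_votes:,.0f}")
# -> Expected votes shifted: 150,000

# Each individual's expected harm is negligible (a 0.1% chance of one
# changed vote), yet 150,000 shifted votes could swing a close election,
# a harm invisible to any purely individual-level assessment.
```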