Coming AI regulation may not protect us from dangerous AI
- Offering no criteria by which to define unacceptable risk for AI systems, and no method to add new high-risk applications to the Act if such applications are discovered to pose a substantial danger of harm. This is particularly problematic because AI systems are becoming broader in their utility.
- Only requiring that companies take into account harms to individuals, excluding consideration of indirect and aggregate harms to society. An AI system that has a very small effect on, e.g., each person's voting patterns might in the aggregate have a huge social impact.
- Permitting virtually no public oversight over the assessment of whether AI meets the Act's requirements.
Feb-5-2023, 03:55:41 GMT
- AI-Alerts:
- 2023 > 2023-02 > AAAI AI-Alert for Feb 7, 2023 (1.00)
- Industry:
- Government (0.40)
- Law (0.40)
- Technology: