People shouldn't pay such a high price for calling out AI harms

MIT Technology Review 

The G7 has just agreed a (voluntary) code of conduct that AI companies should abide by, as governments seek to minimize the harms and risks created by AI systems. And later this week, the UK will be full of AI movers and shakers attending the government's AI Safety Summit, an effort to come up with global rules on AI safety.

Together, these events suggest that the narrative pushed by Silicon Valley about the "existential risk" posed by AI is becoming increasingly dominant in public discourse. This is concerning, because focusing on hypothetical harms that may emerge in the future draws attention away from the very real harms AI is causing today. "Existing AI systems that cause demonstrated harms are more dangerous than hypothetical 'sentient' AI systems because they are real," writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir Unmasking AI: My Mission to Protect What Is Human in a World of Machines.
