The False Comfort of Human Oversight as an Antidote to A.I. Harm
In April, the European Commission released a wide-ranging proposed regulation to govern the design, development, and deployment of A.I. systems. The regulation stipulates that "high-risk A.I. systems" (such as facial recognition and algorithms that determine eligibility for public benefits) should be designed to allow for oversight by humans who will be tasked with preventing or minimizing risks. Often expressed as the "human-in-the-loop" solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the "loop" of A.I. seems reassuring, this approach is instead "loopy" in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems.

A.I. is celebrated for its superior accuracy, efficiency, and objectivity in comparison to humans.
Jun-15-2021, 09:45:00 GMT