AI watchdog needed to regulate automated decision-making, say experts
An artificial intelligence watchdog should be set up to make sure people are not discriminated against by the automated computer systems making important decisions about their lives, say experts.

The rise of artificial intelligence (AI) has led to an explosion in the number of algorithms used by employers, banks, police forces and others, but these systems can, and do, make bad decisions that seriously affect people's lives. Yet because technology companies are so secretive about how their algorithms work – to prevent other firms from copying them – they rarely disclose any detailed information about how AIs have reached particular decisions.

In a new report, Sandra Wachter, Brent Mittelstadt and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, call for a trusted third-party body that can investigate AI decisions for people who believe they have been discriminated against.

"What we'd like to see is a trusted third party, perhaps a regulatory or supervisory body, that would have the power to scrutinise and audit algorithms, so they could go in and see whether the system is actually transparent and fair," said Wachter.
Jan-27-2017, 20:45:04 GMT