ai watchdog


After a teddy bear talked about kink, AI watchdogs are warning parents against smart toys

The Guardian

'Children could become attached to a bot rather than a person or imaginary friend, which could hurt their development.'

Advocates are fighting against the $16.7bn global smart-toy market, decrying surveillance and a lack of regulation.

As the holiday season looms into view with Black Friday, one category on people's gift lists is causing increasing concern: products with artificial intelligence. The development has raised new concerns about the dangers smart toys could pose to children, as consumer advocacy groups say AI could harm kids' safety and development. The trend has prompted calls for increased testing of such products and governmental oversight.


America's AI watchdog is losing its bite

MIT Technology Review

The FTC found that the security giant Evolv lied about the accuracy of its AI-powered security checkpoints, which are used in stadiums and schools but failed to catch a seven-inch knife that was ultimately used to stab a student. It went after the facial recognition company Intellivision, saying the company made unfounded claims that its tools operated without gender or racial bias. It fined startups promising bogus "AI lawyer" services and one that sold fake product reviews generated with AI. These actions did not result in fines that crippled the companies, but they did stop them from making false statements and offered customers ways to recover their money or get out of contracts. In each case, the FTC found, everyday people had been harmed by AI companies that let their technologies run amok.


AI watchdog needed to regulate automated decision-making, say experts

The Guardian

An artificial intelligence watchdog should be set up to make sure people are not discriminated against by the automated computer systems making important decisions about their lives, say experts. The rise of artificial intelligence (AI) has led to an explosion in the number of algorithms used by employers, banks, police forces and others, but the systems can, and do, make bad decisions that seriously impact people's lives. Because technology companies are so secretive about how their algorithms work – to prevent other firms from copying them – they rarely disclose any detailed information about how AIs have made particular decisions. In a new report, Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, call for a trusted third-party body that can investigate AI decisions for people who believe they have been discriminated against. "What we'd like to see is a trusted third party, perhaps a regulatory or supervisory body, that would have the power to scrutinise and audit algorithms, so they could go in and see whether the system is actually transparent and fair," said Wachter.