A legal question for the AI age: Is tricking a robot the same thing as hacking it?

#artificialintelligence 

A team of computer scientists and a lawyer at the University of Washington are raising a curious question: Do current US laws cover cutting-edge research that allows people to bend AI to their will? The research area, called adversarial machine learning, exploits the way AI perceives the world, tricking an algorithm into making a decision other than the one it was designed to make. For example, an attacker might trick an AI into perceiving a stop sign as a speed limit sign, or poison an automated credit-rating system in order to get a cheaper loan. The issue could affect every tech company using AI today: If this kind of intervention constitutes hacking, are companies now legally required to protect their systems against adversarial machine learning as they do against conventional hacking? And if it is not hacking under the legal definition, who is responsible when an attacker crashes someone else's car by tricking its AI?
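To make the idea concrete, here is a minimal sketch of an adversarial perturbation, not the researchers' method: a hypothetical toy linear classifier and an FGSM-style attack that nudges the input against the model's gradient until the predicted label flips. All weights, inputs, and labels below are invented for illustration.

```python
import numpy as np

# Hypothetical toy "classifier": score > 0 -> "stop sign",
# score <= 0 -> "speed limit sign". Weights and input are made up.
w = np.array([0.5, -0.25, 0.75])   # assumed model weights
x = np.array([0.4, 0.1, 0.2])      # benign input the model labels "stop sign"

def label(v):
    return "stop sign" if w @ v > 0 else "speed limit sign"

# FGSM-style attack: push each feature a small step against the
# gradient of the score. For a linear model that gradient is just w.
eps = 0.5
x_adv = x - eps * np.sign(w)       # small, targeted perturbation

print(label(x))      # -> stop sign
print(label(x_adv))  # -> speed limit sign
```

The perturbation is small relative to the input, yet the model's decision flips, which is exactly the kind of "tricking" whose legal status the researchers are questioning.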
