RAI's certification process aims to prevent AIs from turning into HALs

Engadget

Between Microsoft's Tay debacle, the controversies surrounding Northpointe's COMPAS sentencing software, and Facebook's own algorithms helping spread online hate, AI's more egregious public failings over the past few years have shown off the technology's skeevy underbelly -- and just how much work we have to do before these systems can reliably and equitably interact with humanity. Of course, such incidents have done little to tamp down the hype around and interest in artificial intelligence and machine learning, and they certainly haven't slowed the technology's march toward ubiquity. It turns out that one of the primary roadblocks to AI's continued adoption has been the users themselves. We're no longer the same dial-up rubes we were in the baud-rate era. An entire generation has already grown to adulthood without ever knowing the horror of an offline world.


Verbal common sense will prevent AIs from harvesting kings and unlocking oranges

#artificialintelligence

If I hand you an apple, you know from experience that it isn't something you can drive. And the tree it came from can't be woven, nor its seeds bandied. You know that's nonsense, but AIs don't have the benefit of years spent navigating the world, and so have no real idea of what you can do with what -- though a little common sense could be coming their way. Researchers at Brigham Young University want to make sure that future androids and AI entities that interact with the real world have at least a basic understanding of what things are and what they can do. "When machine learning researchers turn robots or artificially intelligent agents loose in unstructured environments, they try all kinds of crazy stuff," said Ben Murdoch, co-author of the BYU study, in a news release.
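The trick behind that "verbal common sense" is to mine word embeddings -- vectors learned from large text corpora -- for hints about which verb-object pairings humans actually use. Here's a minimal sketch of the idea in Python (our illustration, not the BYU implementation): it assumes gensim is installed, the vector file name is a placeholder, and the similarity cutoff is arbitrary.

```python
# Sketch: gate an agent's candidate actions with word-embedding
# "verbal common sense". Assumes gensim is installed and that
# "vectors.bin" is a pretrained word2vec-format file (placeholder).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

def plausibility(verb, noun):
    # Cosine similarity between the verb and the object: a rough
    # proxy for whether people ever pair the two in text.
    return vectors.similarity(verb, noun)

# The 0.2 cutoff is illustrative; a real agent would calibrate it.
for verb, noun in [("drive", "car"), ("drive", "apple"),
                   ("eat", "apple"), ("weave", "tree")]:
    score = plausibility(verb, noun)
    print(f"{verb} {noun}: {score:.2f} ->",
          "worth trying" if score > 0.2 else "skip, nonsense")
```

An agent that consults scores like these before acting would try eating the apple long before it tried driving it.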


Google researchers aim to prevent AIs from discriminating

#artificialintelligence

These elementary AIs only know what we tell them, and if that data carries a bias of any kind, so too will the system trained on it. Google is looking to avoid such awkward and potentially serious situations systematically with a method it calls "Equality of Opportunity." Machine learning systems are basically prediction engines: they learn the characteristics of a training set and then, given a new bit of data, assign it to one of several buckets. An image recognition system might learn to distinguish types of cars, labeling each picture "sedan," "pickup truck," "bus," and so on. The consequences of mislabeling, say, a sedan as a pickup are likely to be trivial, but what if the computer is sorting through people instead of cars, and scoring them for risk of default on a home loan? People underrepresented in the training data are disproportionately likely to run afoul of what the system has learned counts as a good bet -- that's just how machine learning operates. "When group membership coincides with a sensitive attribute, such as race, gender, disability, or religion, this situation can lead to unjust or prejudicial outcomes," wrote Google Brain's Moritz Hardt in a blog post.
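The remedy Hardt and his co-authors propose is to equalize opportunity directly: instead of one global score cutoff, pick a per-group threshold so that people who actually would repay are approved at the same rate whatever group they belong to. Here's a minimal sketch of that idea in Python with toy data; the function name, target rate, and synthetic scores are our own illustration, not Google's code.

```python
import numpy as np

def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """Pick, per group, the score cutoff that approves roughly
    target_tpr of that group's genuinely qualified members, so the
    true positive rate is equal across groups."""
    thresholds = {}
    for g in np.unique(groups):
        qualified = np.sort(scores[(groups == g) & (labels == 1)])
        n = len(qualified)
        # Approving scores >= qualified[k] yields a TPR of (n - k) / n.
        k = int(round((1 - target_tpr) * n))
        thresholds[g] = qualified[min(k, n - 1)]
    return thresholds

# Toy data: a model scores 2,000 loan applicants from two groups.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=2000)
labels = rng.binomial(1, 0.5, size=2000)          # 1 = would repay
# Group 1's scores skew low, e.g. from a sparser credit history.
scores = 0.5 * labels + rng.normal(0.0, 0.3, size=2000) - 0.1 * groups

print(equal_opportunity_thresholds(scores, labels, groups))
```

Because group 1's raw scores run lower across the board, it ends up with a lower cutoff -- exactly the correction that gives its qualified applicants the same shot at approval.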

