Explainable AI could reduce the impact of biased algorithms

#artificialintelligence

On May 25, 2018, the General Data Protection Regulation (GDPR) comes into effect across the EU, requiring sweeping changes to how organizations handle personal data. And GDPR standards have real teeth: for the most serious violations, organizations face a penalty of up to €20 million or 4 percent of global annual revenue, whichever is greater. With the Cambridge Analytica scandal fresh on people's minds, many hope that GDPR will become a model for a new standard of data privacy around the world. We've already heard some industry leaders calling for Facebook to apply GDPR standards to its business in non-EU countries, even though the law doesn't require it. But privacy is only one aspect of the debate around the use of data-driven systems.
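The fine cap is simple arithmetic: the larger of a fixed floor and a revenue share. A minimal sketch of the "whichever is greater" rule, with a purely hypothetical revenue figure:

```python
# GDPR's top-tier fine cap: EUR 20 million or 4% of global annual
# revenue, whichever is greater.
def gdpr_fine_cap(global_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * global_revenue_eur)

# For a firm with EUR 10 billion in annual revenue, the 4% term dominates:
print(f"EUR {gdpr_fine_cap(10_000_000_000):,.0f}")  # EUR 400,000,000
```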


To avoid bias, AI needs to 'explain' itself

#artificialintelligence

Can a credit card be sexist? It's not a question most people would have thought about before this week, but on Monday, state regulators in New York announced an investigation into claims of gender discrimination by Apple Card. The algorithm Apple Card uses to set credit limits is, it has been reported, biased against women. Tech entrepreneur David Heinemeier Hansson (@DHH) claimed that the card offered him 20 times more credit than his wife, even though she had the better credit score, while Apple's own co-founder Steve Wozniak took to Twitter with a similar story, even though he and his wife share bank accounts and assets. Goldman Sachs, the New York bank that backs the Apple Card, released a statement rejecting these assertions, saying that when it comes to assessing credit, they "have not and will not make decisions based on gender."
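Excluding gender as an input does not by itself settle the question, because a model can reproduce gender disparities through correlated features. A minimal sketch of that proxy effect, using entirely synthetic data (this illustrates the general mechanism, not Apple's or Goldman's actual models):

```python
# Sketch: a model trained without the protected attribute can still
# produce different outcomes across groups if another feature carries
# the same signal. All data below is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)               # never shown to the model
proxy = gender * 2.0 + rng.normal(0, 0.5, n) # feature correlated with gender
income = rng.normal(50, 10, n)
# Historical limits that encoded the proxy:
limit = 100 + 5 * income + 30 * proxy + rng.normal(0, 5, n)

X = np.column_stack([income, proxy])         # gender itself is excluded
pred = LinearRegression().fit(X, limit).predict(X)

print("mean predicted limit, group 0:", round(pred[gender == 0].mean(), 1))
print("mean predicted limit, group 1:", round(pred[gender == 1].mean(), 1))
# The groups still receive different limits: the proxy carries the
# gender signal into the model's predictions.
```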


Does Explainable Artificial Intelligence Improve Human Decision-Making?

arXiv.org Machine Learning

Explainable AI provides insight into the "why" for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. Whether explainable AI can improve actual human decision-making, and whether it helps users identify problems with the underlying model, are open questions. Using real datasets, we compare and evaluate objective human decision accuracy without AI (control), with an AI prediction but no explanation, and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct versus incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
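The study's three-condition design reduces to straightforward bookkeeping: user accuracy grouped by condition, and by whether the AI happened to be right. A minimal sketch of that aggregation, with toy trial records standing in for the paper's real datasets:

```python
# Sketch of the three-condition comparison described above. The trial
# records here are illustrative toy data, not the paper's datasets.
from collections import defaultdict

# Each trial: (condition, user_correct, ai_correct); ai_correct is None
# in the control condition, where no AI prediction is shown.
trials = [
    ("control",      True,  None),
    ("ai_only",      True,  True),
    ("ai_only",      False, False),
    ("ai_explained", True,  True),
    ("ai_explained", True,  False),
]

by_condition = defaultdict(list)
for condition, user_correct, _ in trials:
    by_condition[condition].append(user_correct)

for condition, outcomes in by_condition.items():
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{condition}: user accuracy = {accuracy:.2f}")

# Since the paper's strongest predictor of user accuracy was AI
# accuracy, a fuller analysis would also condition on ai_correct.
```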


Explainable AI: 4 industries where it will be critical

#artificialintelligence

Let's say that I find it curious how Spotify recommended a Justin Bieber song to me, a 40-something non-Belieber. That doesn't necessarily mean Spotify's engineers must ensure their algorithms are transparent and comprehensible to me; I might find the recommendation a tad off-target, but the consequences are decidedly minimal. The stakes of a decision are a fundamental litmus test for explainable AI – that is, machine learning algorithms and other artificial intelligence systems that produce outcomes humans can readily understand and trace back to their origins. Conversely, relatively low-stakes AI systems might be just fine with the black-box model, where we don't understand (and can't readily figure out) the results. "If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn't need regulators plumbing the depths of how those recommendations are made," says Dave Costenaro, head of artificial intelligence R&D at Jane.ai.
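For the higher-stakes cases, one concrete form of explainability is a model whose decision score decomposes into per-feature contributions that a human can read directly. A minimal sketch, assuming scikit-learn and invented credit features (not any real lender's model):

```python
# Sketch of a directly interpretable model: a logistic regression whose
# log-odds score is a sum of per-feature contributions. Feature names
# and data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "credit_history_len", "utilization"]
X = np.array([[55.0, 10.0, 0.30],
              [30.0,  2.0, 0.90],
              [80.0, 15.0, 0.10],
              [25.0,  1.0, 0.95]])
y = np.array([1, 0, 1, 0])  # 1 = approve, 0 = decline

model = LogisticRegression().fit(X, y)

applicant = np.array([60.0, 8.0, 0.20])
# Each feature's contribution to the decision score (log-odds):
contributions = model.coef_[0] * applicant
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")
print("decision:", "approve" if model.predict([applicant])[0] else "decline")
```

A deep black-box model might score the same applicant more accurately, but it offers no comparable line-by-line account of why, which is the trade-off the article describes.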