Enhancing trust in artificial intelligence: Audits and explanations can help

#artificialintelligence

There is a lively debate all over the world about AI's perceived "black box" problem. Most fundamentally, if a machine can be taught to learn on its own, how does it explain its conclusions? This issue comes up most frequently in discussions of how to address possible algorithmic bias. One way to address it is to mandate a right to a human decision, as the General Data Protection Regulation's (GDPR) Article 22 does. Here in the United States, Senators Wyden and Booker have proposed the Algorithmic Accountability Act, which would compel companies to conduct impact assessments.


AI, You've Got Some Explaining To Do

#artificialintelligence

Artificial intelligence has the potential to dramatically rearrange our relationship with technology, ushering in a new era of human productivity, leisure, and wealth. But none of that good stuff is likely to happen unless AI practitioners can deliver on one simple request: explain to us how the algorithms got their answers. Businesses have never relied more heavily on machine learning algorithms to guide decision-making than they do right now. Buoyed by the rise of deep learning models that can act upon huge masses of data, the benefits of using machine learning algorithms to automate a host of decisions are simply too great to pass up. Indeed, some executives see it as a matter of business survival.
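
The blurb above does not name a specific explanation technique, but as an illustrative sketch of what "explain how the algorithm got its answer" can mean in practice (the dataset, model, and method below are my own choices, not anything named in the article), permutation importance is one common post-hoc way to report which inputs a trained model's decisions actually depend on:

```python
# Illustrative sketch only: the article names no method or model.
# Permutation importance shuffles each feature in turn and measures how much
# held-out accuracy drops; large drops mark features the model relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real audit would use the production dataset.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report features from most to least influential on held-out predictions.
for i in sorted(range(X.shape[1]), key=lambda i: -result.importances_mean[i]):
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```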


We need to hold algorithms accountable--here's how to do it

#artificialintelligence

Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable. Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways.
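
The piece argues for holding algorithmic systems accountable but does not prescribe a specific check. As a minimal sketch of one basic audit such accountability might involve (the group labels, sample data, and the 0.8 "four-fifths rule" threshold are assumptions on my part, not from the article), developers can compare a model's favorable-decision rates across demographic groups:

```python
# Minimal sketch of one accountability check implied by the argument above:
# compare positive-decision (selection) rates across demographic groups.
# The data, group labels, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

# (group, model_decision) pairs; 1 = favorable decision (e.g., loan approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, decision in decisions:
    counts[group][0] += decision
    counts[group][1] += 1

rates = {group: pos / total for group, (pos, total) in counts.items()}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}")

# Disparate impact ratio: lowest group rate divided by highest group rate.
# A value below roughly 0.8 is a common (though rough) flag for further review.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")
```

A ratio-based check like this is only a starting point for review, not a verdict, but it illustrates the kind of routine measurement a publicly accountable design process could build in.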