Explainable AI: Why should business leaders care?

#artificialintelligence

Artificial intelligence (AI) has become increasingly pervasive and is seeing widespread adoption across industries. Faced with growing competitive pressure and the AI success stories of their peers, more and more organizations are adopting AI in various facets of their business. Machine Learning (ML) models, the key components driving these AI systems, are becoming increasingly powerful, displaying impressive, sometimes superhuman, performance on many tasks. However, this increased performance has been accompanied by an increase in model complexity, turning AI systems into black boxes whose decisions can be hard for humans to understand. Employing black box models can have severe ramifications, as the decisions these systems make not only influence business outcomes but can also affect many lives.


Where explainable AI will be crucial in industry - TechHQ

#artificialintelligence

As artificial intelligence (AI) matures and new applications boom amid the transition to Industry 4.0, we are beginning to accept that machines can help us make decisions more effectively and efficiently. But, at present, we don't always have clear insight into how or why a model made those decisions; this is 'black box AI'. In light of alleged bias in AI models in applications across recruitment, loan decisions, and healthcare, the ability to effectively explain the decisions made by AI models has become imperative for the technology's further development and adoption. In December last year, the UK's Information Commissioner's Office (ICO) began moving to require, by law, that businesses and other organizations explain decisions made by AI, or face multimillion-dollar fines if they are unable to do so. Explainable AI is the concept of being able to describe the procedures, services, and outcomes delivered or assisted by AI when that information is required, such as in the case of accusations of bias.


Explainable AI: 4 industries where it will be critical

#artificialintelligence

Let's say that I find it curious how Spotify recommended a Justin Bieber song to me, a 40-something non-Belieber. That doesn't necessarily mean that Spotify's engineers must ensure that their algorithms are transparent and comprehensible to me; I might find the recommendation a tad off-target, but the consequences are decidedly minimal. This is a fundamental litmus test for explainable AI, that is, machine learning algorithms and other artificial intelligence systems that produce outcomes humans can readily understand and trace back to their origins. Conversely, relatively low-stakes AI systems might be just fine with the black box model, where we don't understand (and can't readily figure out) the results. "If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn't need regulators plumbing the depths of how those recommendations are made," says Dave Costenaro, head of artificial intelligence R&D at Jane.ai.


Explaining AI: Why Do Business Leaders Need to Care?

#artificialintelligence

The use of artificial intelligence (AI) has become more pervasive, and adoption is spreading across all industries. With increasing competitive pressure and the AI successes of their peers in view, more and more businesses are integrating AI into various aspects of their operations. The Machine Learning (ML) models that drive AI systems have become increasingly powerful, displaying impressive, sometimes superhuman, abilities on many tasks. Although AI systems have improved in performance, model complexity has increased as well, making them black boxes whose decisions may be difficult for humans to understand. Because black box models influence not only business outcomes but also the lives of many people, relying on them can have severe ramifications.