
What Is Explainable AI (XAI) and How Will It Improve Digital Marketing?


Can your brand explain how its artificial intelligence (AI) applications work, and why they make the decisions they do? Brand trust is hard to win and easy to lose; transparent, easily explainable AI applications are a strong start toward building customers' trust and improving the efficiency and effectiveness of AI apps. This article looks at Explainable AI (XAI) and why it should be part of your brand's AI strategy. Typical AI apps are often called "black box" AI because what happens inside the application is largely opaque to everyone but the data scientists, programmers, and designers who created it, and even those people, individually, may be unable to explain much beyond their own domain.
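One common way to peek inside a "black box" is a global surrogate: probe the opaque model with sample inputs and fit a simple, human-readable rule to its outputs. A minimal sketch in Python, where `black_box_score` is a hypothetical stand-in for an opaque scoring model (not any particular product):

```python
# A hypothetical "black box" scoring model: its internals are opaque
# to the marketers who rely on its decisions.
def black_box_score(income, visits):
    return 1 if (0.7 * income + 0.3 * visits) > 50 else 0

# Probe the black box on a grid of sample customers.
samples = [(inc, vis) for inc in range(0, 101, 10) for vis in range(0, 101, 10)]
labels = [black_box_score(inc, vis) for inc, vis in samples]

def fit_threshold(feature_index):
    """Fit a transparent one-feature rule that best mimics the black box."""
    best = (0, None)  # (agreement rate, threshold)
    for t in range(0, 101, 5):
        preds = [1 if s[feature_index] > t else 0 for s in samples]
        agreement = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        best = max(best, (agreement, t))
    return best

acc, threshold = fit_threshold(0)  # explain via the income feature alone
print(f"Surrogate rule: income > {threshold} (agrees {acc:.0%} of the time)")
```

The surrogate rule is not the model; it is an approximation whose agreement rate tells you how faithfully the simple explanation tracks the black box's behaviour.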

What's Inside the "Black Box" of Machine Learning? - RTInsights


Machine learning can optimize business decisions, but the decision reached by an algorithm often isn't transparent. The list of possibilities is endless. Machine learning applications "can provide customer service, manage logistics, analyze medical records, or even write news stories," a recent report by McKinsey Global Institute explains. The McKinsey report identified 120 potential use cases and interviewed 600 industry experts on the potential impact of machine learning. As machines take on routinized decision-making processes, "the value potential is everywhere, even in industries that have been slow to digitize," the report's authors explain.

Getting big impact from big data


New technology tools are making adoption by the front line much easier, and that's accelerating the organizational adaptation needed to produce results. The world has become excited about big data and advanced analytics not just because the data are big but also because the potential for impact is big. Our colleagues at the McKinsey Global Institute (MGI) caught many people's attention several years ago when they estimated that retailers exploiting data analytics at scale across their organizations could increase their operating margins by more than 60 percent, and that the US healthcare sector could reduce costs by 8 percent through data-analytics efficiency and quality improvements (see the full McKinsey Global Institute report, Big data: The next frontier for innovation, competition, and productivity, May 2011). Unfortunately, achieving the level of impact MGI foresaw has proved difficult. True, there are successful examples of companies such as Amazon and Google, where data analytics is a foundation of the enterprise (to learn how marketing functions in Google's data-driven culture, see "How Google breaks through," February 2015).

Visual Analytics for Explainable Deep Learning

Recently, deep learning has been advancing the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advancements, the lack of explanation regarding the decisions made by deep learning models and absence of control over their internal processes act as major drawbacks in critical decision-making processes, such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. In this paper, we review visual analytics, information visualization, and machine learning perspectives relevant to this aim, and discuss potential challenges and future research directions.
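One widely used model-agnostic interpretability technique in this line of work is permutation importance: shuffle one input feature across the dataset and measure how much the model's accuracy drops. A minimal sketch under toy assumptions (the dataset and the stand-in `model` are illustrative, not from the paper above):

```python
import random

random.seed(0)

# Toy dataset: feature 0 fully determines the label; feature 1 is noise.
data = [(random.random(), random.random()) for _ in range(500)]
labels = [1 if x0 > 0.5 else 0 for x0, _ in data]

# Stand-in for a trained model: any callable mapping features to a label.
def model(x0, x1):
    return 1 if x0 > 0.5 else 0

def accuracy(rows):
    return sum(model(*row) == y for row, y in zip(rows, labels)) / len(labels)

def permutation_importance(feature_index):
    """Accuracy drop when one feature's values are shuffled across rows."""
    baseline = accuracy(data)
    shuffled_col = [row[feature_index] for row in data]
    random.shuffle(shuffled_col)
    permuted = [
        tuple(shuffled_col[i] if j == feature_index else v
              for j, v in enumerate(row))
        for i, row in enumerate(data)
    ]
    return baseline - accuracy(permuted)

print("importance of feature 0:", permutation_importance(0))  # large drop
print("importance of feature 1:", permutation_importance(1))  # no drop
```

Because the technique only needs model inputs and outputs, it works on any black box, which is exactly why it recurs in the interpretability literature the paper surveys.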

Where explainable AI will be crucial in industry - TechHQ


As artificial intelligence (AI) matures and new applications boom amid a transition to Industry 4.0, we are beginning to accept that machines can help us make decisions more effectively and efficiently. But, at present, we don't always have a clear insight into how or why a model made those decisions – this is'blackbox AI'. In light of alleged bias in AI models in applications across recruitment, loan decisions, and healthcare applications, the ability to effectively explain the workings of decisions made by AI model has become imperative for the technology's further development and adoption. In December last year, the UK's Information Commissioner's Office (ICO) began moving to ensure businesses and other organizations are required to explain decisions made by AI by law, or face multimillion-dollar fines if unable. Explainable AI is the concept of being able to describe the procedures, services, and outcomes delivered or assisted by AI when that information is required, such as in the case of accusations of bias.