Collaborating Authors

The How of Explainable AI: Explainable Modelling


Achieving explainable modelling is sometimes considered synonymous with restricting the choice of AI model to a specific family of models regarded as inherently explainable. We review this family of AI models, but our discussion goes well beyond the conventional explainable model families to include more recent and novel approaches such as joint prediction and explanation, hybrid models, and more. Ideally, the black-box problem is avoided from the outset by developing a model that is explainable by design; the traditional route to this goal is to adopt a model from one of the families considered inherently explainable.
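As a minimal sketch of the "explainable by design" approach, the snippet below fits a shallow decision tree, a classic member of the inherently interpretable model families, and prints its learned rules as readable if/else logic. The dataset, depth limit, and use of scikit-learn are illustrative assumptions, not prescriptions from the text.

```python
# Explainable-by-design sketch: a shallow decision tree whose decision
# rules can be read directly, rather than a black-box model explained
# post hoc. Dataset and max_depth are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow tree keeps the rule set small enough for a human to audit.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text renders the learned tree as human-readable rules, so the
# "explanation" is the model itself, not an add-on.
rules = export_text(model, feature_names=list(data.feature_names))
print(rules)
```

Because the whole model is a handful of threshold rules, a domain expert can inspect and challenge each split directly, which is the core appeal of this model family.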

Why is explainable artificial intelligence a must for the enterprise? EM360


Artificial intelligence (AI) is one of the most exciting technologies in the world right now. In particular, it is bringing to life ideas that were once just the stuff of Hollywood films. However, it has also created polarised viewpoints: many AI experts are working towards reaping its full potential, while others worry about creating a Black Mirror-esque reality. Perhaps the best way to meet in the middle is by exploring explainable AI.

Why I agree with Geoff Hinton: I believe that Explainable AI is over-hyped by media


Geoffrey Hinton dismissed the need for explainable AI, and a range of experts have explained why he is wrong. I actually tend to agree with Geoff: explainable AI is overrated and hyped by the media. A whole industry has sprung up with a business model of scaring everyone about AI not being explainable.

Explainable Artificial Intelligence


Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI--especially explainable machine learning--will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.