5 Reasons Why We Need Explainable Artificial Intelligence

#artificialintelligence

This might be the first time you have heard of Explainable Artificial Intelligence, but it is certainly something you should have an opinion about. Explainable AI (XAI) refers to techniques and methods for building AI applications that let humans understand "why" they make particular decisions. In other words, if we can get explanations from an AI system about its inner logic, we consider it an XAI system. Explainability is a property that has gained popularity in the AI community in recent years, and we will talk about why that happened. First, let's dive into the technical roots of the problem.
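
To make "getting explanations from an AI system" concrete, here is a minimal sketch using permutation feature importance, one common post-hoc explanation technique. The dataset, model, and scikit-learn calls are my own illustrative assumptions, not anything the article prescribes.

```python
# A hedged sketch: ask a trained model "why" via permutation importance.
# Dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the test score drops:
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```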


Atish Ray on LinkedIn: Industrialized ML for Governed, Responsible and Explainable AI - Databricks

#artificialintelligence

Accenture research shows that a full 84% of C-suite executives believe they must leverage Artificial Intelligence (AI) to achieve their growth objectives. Yet 76% acknowledge they struggle to scale it across the business. Having the right framework in place for "Industrializing ML" is a key component of scaling AI in the enterprise. Join us for a glimpse into the world of Industrialized ML as it comes to life at Navy Federal Credit Union using the Databricks Unified Analytics Platform.


Explainable AI and Design

#artificialintelligence

The most useful and accurate AI models also tend to be the most complex, and the more complex a model is, the more challenging it is to comprehend and trust. Why did it make that prediction? AI is not infallible, and it increasingly operates in an opaque way. This severely limits the adoption of advanced AI models in critical settings. The goal of Explainable AI (XAI) is to develop techniques that help users better understand and trust AI models.


A collection of recommendable papers and articles on Explainable AI (XAI)

#artificialintelligence

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even when there are no legal rights or regulatory requirements--for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem.
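
To see the contrast in code, a model such as logistic regression is interpretable by construction: its coefficients are the explanation. The following sketch (with an assumed dataset and setup, chosen only for illustration) reads an explanation straight off the fitted model, something a deep "black box" network does not offer.

```python
# A hedged sketch of a model that explains itself: after standardization,
# each logistic-regression coefficient says how one feature shifts the
# log-odds of the prediction. Dataset choice is an illustrative assumption.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Read the "explanation" directly: the largest-magnitude coefficients
# are the features that drive the model's decisions.
coefs = clf.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {w:+.2f}")
```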


Explainable AI: Making Sense of the Black Box

#artificialintelligence

The Black Square is an iconic painting by the Russian artist Kazimir Malevich. The first version was completed in 1915. The Black Square continues to impress art historians even today; however, it did not impress the Soviet government of its time and was kept in such poor conditions that it suffered significant cracking and decay. Complex machine learning algorithms can be mathematical works of art, but if these black-box algorithms fail to impress and build trust with their users, they might be ignored like Malevich's Black Square. Dramatic success in machine learning has led to a surge of Artificial Intelligence (AI) applications.


Explainable Artificial Intelligence (XAI)

#artificialintelligence

This article was written by Dr. Matt Turek. Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems.


Assistant Professor in Explainable AI (tenure-track)

#artificialintelligence

We invite applications for a tenure-track position in computer science, focused on explainable artificial intelligence and the ability to collaborate with the social sciences. DKE research lines include human-centered aspects of recommender systems, as well as a strong applied mathematics component such as dynamic game theory (differential, evolutionary, spatial, and stochastic game theory). The position is supported by the large and growing Explainable and Reliable Artificial Intelligence (ERAI) group at DKE. The group consists of Associate and Assistant Professors, postdoctoral researchers, PhD candidates, and master's and bachelor's students. The ERAI group works together closely on a day-to-day basis to exchange knowledge, ideas, and research advancements.


The explainability problem - can new approaches pry open the AI black box?

#artificialintelligence

The so-called "black-box" aspect of AI, usually referred to as the explainability problem, or XAI for short, arose slowly over the past few years. Still, with the rapid development of AI, it is now considered a significant problem. How can you trust a model if you cannot understand how it reaches its conclusions? Whether for commercial benefits, ethical concerns, or regulatory considerations, XAI is essential if users are to understand, appropriately trust, and effectively manage AI results. In researching this topic, I was surprised to find almost 400 papers on the subject.


The tensions between explainable AI and good public policy

#artificialintelligence

There are two reasons why. First, with machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability. The larger and more complex a model, the harder it will be to understand, even though its performance is generally better. Unfortunately, for complex situations with many interacting influences--which is true of many key areas of policy--machine learning will often be more useful the more of a black box it is. As a result, holding such systems accountable will almost always be a matter of post hoc monitoring and evaluation.
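
A toy experiment can make that trade-off tangible: fit a readable linear model and a larger boosted ensemble on the same data and compare their scores. Everything below (synthetic data, model choices, scikit-learn) is an assumption of mine for illustration, not the article's method.

```python
# A hedged sketch of the performance/explainability trade-off:
# the boosted ensemble usually scores higher, but only the linear
# model's weights can be read off directly. Synthetic data is assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

print("linear model :", cross_val_score(interpretable, X, y, cv=5).mean().round(3))
print("boosted trees:", cross_val_score(black_box, X, y, cv=5).mean().round(3))
```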


Opening the Black Box with Explainable AI [Hands-on]

#artificialintelligence

Artificial Intelligence is often said to be a "black box" -- an opaque, almost mystical thing that we don't really understand. Throw data into the black box, and out comes a prediction, or so they say. However, much of AI is not opaque; it is just a complex system that "reasons" differently than we (think we) do. For example, kids learn to write by first experimenting with letters and finding patterns in words. GPT-3 learned to write by training a generative text algorithm on the entire Internet, yielding a model with 175 billion parameters that, essentially, predicts how "the Internet" would complete a prompt.
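
In the hands-on spirit of that article, here is a minimal, hand-rolled version of the local-surrogate idea popularized by LIME: perturb a single input, query the black box, and fit a small linear model that mimics it locally. The dataset, black-box model, and perturbation scale are illustrative assumptions of mine, not the article's code.

```python
# A hedged sketch of a local surrogate explanation (the idea behind LIME).
# Dataset, black-box model, and perturbation scale are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Sample points around one instance and record the black box's outputs.
rng = np.random.default_rng(0)
x0 = X[0]
perturbed = x0 + rng.normal(0.0, X.std(axis=0) * 0.1, size=(500, X.shape[1]))
probs = black_box.predict_proba(perturbed)[:, 1]

# Fit a simple linear surrogate to the black box's local behavior;
# its coefficients serve as the explanation for this one prediction.
surrogate = Ridge().fit(perturbed - x0, probs)
for i in np.argsort(-np.abs(surrogate.coef_))[:5]:
    print(f"{data.feature_names[i]}: {surrogate.coef_[i]:+.4f}")
```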