Explanation & Argumentation


UK data regulator urges business towards explainable AI - TechHQ

#artificialintelligence

The Information Commissioner's Office (ICO) is putting forward regulation that would require businesses and other organizations to explain decisions made by artificial intelligence (AI), or face multimillion-dollar fines if they cannot. The guidance will advise organizations on how to explain the procedures, services, and outcomes delivered or assisted by AI to affected individuals, and on documenting the decision-making process and the data used to arrive at a decision. In extreme cases, organizations that fail to comply may face a fine of up to 4 percent of global turnover under the EU's data protection law. The new guidance is significant because many firms in the UK already use some form of AI to make critical business decisions, such as shortlisting and hiring candidates for roles.


Beginner's Guide To Explainable AI: Hands-On Introduction To What-If Tool

#artificialintelligence

Explainable AI, or XAI for short, is a field concerned with making the decision-making of complex machine learning models and algorithms transparent. In this article, we will look at a tool built for exactly that purpose. A simple way to understand the concept is to compare the decision-making process of humans with that of machines. How do we humans come to a decision? We make decisions all the time, from small, insignificant ones such as what outfit to wear to an event, to highly complex, risky ones such as investments or loan approvals.
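
The What-If Tool itself ships as a Jupyter/Colab widget in the witwidget package. A minimal sketch of launching it might look like the following, where examples (a list of tf.Example protos) and predict_fn (any function that returns model scores for a list of examples) are hypothetical placeholders standing in for your own data and model:

```python
# Minimal sketch: launching the What-If Tool in a notebook.
# `examples` and `predict_fn` are placeholders for your own data and model.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

config_builder = (
    WitConfigBuilder(examples[:500])        # subset of examples to explore
    .set_custom_predict_fn(predict_fn)      # hook in any model's predict function
)
WitWidget(config_builder, height=800)       # renders the interactive tool inline
```

Once rendered, the widget lets you edit individual examples, re-run predictions, and compare counterfactuals without writing further code.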


Explain yourself, mister: Fresh efforts at Google to understand why an AI system says yes or no

#artificialintelligence

Google has announced a new Explainable AI feature for its cloud platform, which surfaces information about which input features contributed most to a model's prediction. Artificial neural networks, used by many of today's machine learning and AI systems, are modelled to some extent on biological brains. One of the challenges with these systems is that as they have grown larger and more complex, it has become harder to see the exact reasons for specific predictions. Google's white paper on the subject refers to a "loss of debuggability and transparency", and the uncertainty this introduces has serious consequences.
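
Google's service reportedly computes these per-feature attributions with techniques such as integrated gradients. Purely as an illustration of the general idea (a generic sketch with a toy model, not Google's implementation), integrated gradients can be written in a few lines of TensorFlow:

```python
import tensorflow as tf

def integrated_gradients(model, baseline, inputs, steps=50):
    """Approximate integrated-gradients attributions for a single input."""
    # Interpolate between the baseline and the input along a straight path.
    alphas = tf.linspace(0.0, 1.0, steps + 1)                    # (steps+1,)
    interpolated = baseline + alphas[:, tf.newaxis] * (inputs - baseline)

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        predictions = model(interpolated)
    grads = tape.gradient(predictions, interpolated)             # (steps+1, n_features)

    # Average gradients along the path (trapezoidal rule) and scale by
    # how far each feature moved from the baseline.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (inputs - baseline) * avg_grads

# Toy example: attributions roughly sum to the change in the model's output
# between the baseline and the input.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
baseline = tf.zeros((4,))
x = tf.constant([0.5, -1.2, 3.0, 0.1])
print(integrated_gradients(model, baseline, x))
```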


Google tackles the black box problem with Explainable AI

#artificialintelligence

There is a problem with artificial intelligence. It can be remarkably good at churning through gigantic amounts of data to solve challenges that humans struggle with, but understanding how it makes its decisions is often very difficult, if not impossible. That means that when an AI model works, it is not as easy as it should be to make further refinements, and when it exhibits odd behaviour it can be hard to fix. But at an event in London this week, Google's cloud computing division pitched a new facility that it hopes will give it an edge on Microsoft and Amazon, which dominate the sector.


Full Professor in Explainable Artificial Intelligence

#artificialintelligence

We are the Department of Data Science and Knowledge Engineering (DKE) at Maastricht University, the Netherlands: an international community of 50 researchers at various stages of their careers, embedded in the Faculty of Science and Engineering (FSE). Our department has nearly 30 years of experience in research and teaching in the fields of Artificial Intelligence, Computer Science and Mathematics, carried out in a highly collaborative and cross-disciplinary manner. To strengthen our team, we are looking for a full professor who will work on AI systems that can explain the decisions and actions they recommend or take in a human-understandable way. Our department is growing rapidly, and this position is one of multiple job openings: you are more than welcome to browse through our other vacancies.


Google's Explainable AI service sheds light on how machine learning models make decisions - SiliconANGLE

#artificialintelligence

Google LLC has introduced a new "Explainable AI" service to its cloud platform aimed at making the process by which machine learning models come to their decisions more transparent. The idea is that this will help build greater trust in those models, Google said. That's important because most existing models tend to be rather opaque. It's just not clear how they reach their decisions. Tracy Frey, director of strategy for Google Cloud AI, explained in a blog post today that Explainable AI is intended to improve the interpretability of machine learning models.
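
For readers who want a feel for what per-feature interpretability output looks like, a simple model-agnostic baseline (not Google's service, just a generic technique) is permutation importance, available in scikit-learn; the dataset and model below are only illustrative:

```python
# Permutation importance: shuffle each feature in turn and measure how much
# the model's test score drops. Larger drops indicate more influential features.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1], reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```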