Explanation & Argumentation


The problem with 'explainable AI'

#artificialintelligence

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose where and how they got the data they used to fuel their AI systems' decisions. Consumers should own their data and should be privy to the myriad ways that businesses use and sell such information, which is often done without clear and conscious consumer consent. Because data is the foundation for all AI, it is valid to want to know where the data comes from and how it might explain biases and counterintuitive decisions that AI systems make. On the algorithmic side, grandstanding by IBM and other tech giants around the idea of "explainable AI" is nothing but virtue signaling that has no basis in reality.


The path to explainable AI

#artificialintelligence

Artificial intelligence (AI) shifts the computing paradigm from rule-based programming to an outcome-based approach. It allows processes to operate at scale, reduces human processing errors, and invents new ways of solving problems. AlphaGo inspired Go players to try new strategies after experts had been using the same opening moves for 3,000 years. As adoption increases, AI will enable organizations to unlock the "last mile" that traditional automation could not address. But as more enterprises entrust AI to make decisions on their behalf, governance becomes critical.


France to Seek Backing for New Mechanism to Assign Blame for Chemical Attacks

U.S. News

Recent use includes the assassination with VX of Kim Jong Nam, half-brother of North Korean leader Kim Jong Un, in Kuala Lumpur airport in February 2017 and the attempted murder of Sergei Skripal, a 66-year-old former Russian double agent, and his daughter with a Novichok nerve agent in March in England.


A Matrix Approach for Weighted Argumentation Frameworks

AAAI Conferences

The assignment of weights to attacks in a classical Argumentation Framework makes it possible to compute semantics that take into account the relative importance of each argument. We represent a Weighted Argumentation Framework by a non-binary matrix, and we characterize the basic extensions (such as w-admissible, w-stable, w-complete) by analysing sub-blocks of this matrix. We also show how to reduce the matrix to a smaller one that is equivalent to the original for the determination of extensions. Furthermore, we provide two algorithms that incrementally build w-grounded and w-preferred extensions starting from a w-admissible extension.
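The matrix representation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's construction: the argument names, the weights, and the budget-style notion of w-conflict-freeness used here are assumptions for the example.

```python
# Sketch: a Weighted Argumentation Framework (WAF) as a non-binary matrix.
# Entry M[i][j] is the weight of the attack from argument i to argument j
# (0 means no attack). A candidate extension then corresponds to a
# sub-block of M, and its internal attack weight is the sum over that
# sub-block. The alpha "budget" below is an illustrative relaxation of
# conflict-freeness, not the paper's exact definition.

ARGS = ["a", "b", "c"]
M = [
    [0, 3, 0],   # a attacks b with weight 3
    [0, 0, 1],   # b attacks c with weight 1
    [2, 0, 0],   # c attacks a with weight 2
]

def internal_attack_weight(subset):
    """Sum of attack weights inside the sub-block induced by `subset`."""
    idx = [ARGS.index(x) for x in subset]
    return sum(M[i][j] for i in idx for j in idx)

def is_w_conflict_free(subset, alpha=0):
    """A set is conflict-free up to budget alpha if its internal attacks
    weigh at most alpha; alpha=0 recovers classical conflict-freeness."""
    return internal_attack_weight(subset) <= alpha

print(is_w_conflict_free({"a", "c"}))           # c attacks a internally
print(is_w_conflict_free({"a", "c"}, alpha=2))  # tolerated within budget
```

The point of the matrix view is that such checks reduce to inspecting sub-blocks, which is what lets the paper shrink the matrix to an equivalent smaller one before computing extensions.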


On Looking for Invariant Operators in Argumentation Semantics

AAAI Conferences

We study invariant local expansion operators for admissible sets in Abstract Argumentation Frameworks (AFs). In the future-work section, we also introduce invariant local expansion for conflict-free sets, and we derive a definition of robustness for AFs in terms of the number of times such operators can be applied without producing any change in the chosen semantics.


Dodgers' Andrew Friedman: 'If we had to assign blame at this point, it should be me who is taking that'

Los Angeles Times

The overall team performance will obviously get much better as we click on at least two of those cylinders. When we get some of our guys back in the next week, we're confident our offense is going to perform better. It's incumbent upon us, with our bullpen, to get back to what we were doing last year. We're confident we have the guys down there to perform way better than we have.


The Hunt for Explainable AI

#artificialintelligence

As we face a future in which important decisions affecting the course of our lives may be made by artificial intelligence (AI), the idea that we should understand how AIs make decisions is gaining increasing currency. Which hill to position a 20-year-old soldier on, who gets (or does not get) a home mortgage, which treatment a cancer patient receives … such decisions, and many more, already are being made based on an often unverifiable technology. "The problem is that not all AI approaches are created equal," says Jeff Nicholson, a vice president at Pegasystems Inc., makers of AI-based Customer Relationship Management (CRM) software. "Certain 'black box' approaches to AI are opaque and simply cannot be explained."


A General Account of Argumentation with Preferences

arXiv.org Artificial Intelligence

This paper builds on the recent ASPIC+ formalism, to develop a general framework for argumentation with preferences. We motivate a revised definition of conflict free sets of arguments, adapt ASPIC+ to accommodate a broader range of instantiating logics, and show that under some assumptions, the resulting framework satisfies key properties and rationality postulates. We then show that the generalised framework accommodates Tarskian logic instantiations extended with preferences, and then study instantiations of the framework by classical logic approaches to argumentation. We conclude by arguing that ASPIC+'s modelling of defeasible inference rules further testifies to the generality of the framework, and then examine and counter recent critiques of Dung's framework and its extensions to accommodate preferences.
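The core move in frameworks of this kind is that an attack only succeeds as a defeat when preferences permit it. The following toy sketch illustrates that idea in the abstract; the arguments, attacks, and preference relation are invented for the example, and this is the generic preference-based pattern rather than ASPIC+'s specific definitions.

```python
# Sketch: preference-aware defeat. An attack from x to y succeeds as a
# defeat only if y is not strictly preferred to x. Extensions are then
# computed over the defeat relation rather than the raw attack relation.

attacks = {("a", "b"), ("b", "c"), ("c", "a")}

# (x, y) in strict_pref means x is strictly preferred to y.
strict_pref = {("b", "a")}  # b outranks a

def defeats(x, y):
    """x defeats y if x attacks y and y is not strictly preferred to x."""
    return (x, y) in attacks and (y, x) not in strict_pref

print(defeats("a", "b"))  # attack repelled: b is strictly preferred to a
print(defeats("b", "c"))  # attack succeeds as a defeat
```

Note how the preference relation can remove an attack from consideration, which is exactly why the paper revisits what "conflict free" should mean once preferences are in play.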


Developing a Dataset for Personal Attacks and Other Indicators of Biases

AAAI Conferences

Online argumentation, particularly on popular public discussion boards and social media, is rich with fallacy- and bias-prone arguments. An artificially intelligent tool capable of identifying potential biases in online argumentation might be able to address this growing problem, but what would it take to develop such a tool? In this paper, we attempt to answer this question by carefully defining both argumentative biases and fallacies, and laying out some guidelines for automated bias detection. After laying out a roadmap and identifying current bottlenecks, we take some initial steps towards relieving these limitations through the creation of a dataset of personal and ad hominem attacks in comments. Our progress in this direction is summarized.