Explanation & Argumentation


Artificial Intelligence Has Got Some Explaining to Do

#artificialintelligence

During last Wednesday's congressional hearing about Twitter transparency, Twitter CEO Jack Dorsey was forced to take accountability for the damaging cultural and political effects of his company. Soft-spoken and contrite, Dorsey provided a stark contrast to Facebook's Mark Zuckerberg, who seemed more confident when he appeared before Congress in April. In the months since, collective faith in the fabric of the internet has been anything but restored; instead, consumers, politicians, and the tech companies themselves continue to grapple with the aftermath of what social platforms hath wrought. During the hearing, Representative Debbie Dingell asked Dorsey if Twitter's algorithms are able to learn from the decisions they make--like who they suggest users follow, which tweets rise to the top, and in some cases what gets flagged for violating the platform's terms of service or even who gets banned--and also if Dorsey could explain how all of this works. "Great question," Dorsey responded, seemingly excited at a line of questioning that piqued his intellectual curiosity.


On Looking for Local Expansion Invariants in Argumentation Semantics: a Preliminary Report

arXiv.org Artificial Intelligence

We study invariant local expansion operators for conflict-free and admissible sets in Abstract Argumentation Frameworks (AFs). Such operators are applied directly to AFs and are invariant with respect to a chosen "semantics" (that is, w.r.t. each of the conflict-free/admissible sets of arguments). Accordingly, we derive a definition of robustness for AFs in terms of the number of times such operators can be applied without producing any change in the chosen semantics.
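For readers unfamiliar with the two semantics the abstract refers to, here is a minimal sketch of conflict-freeness and admissibility on a toy AF. The function names and the example framework are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): conflict-free and admissible
# sets in an Abstract Argumentation Framework AF = (arguments, attacks).

def is_conflict_free(s, attacks):
    """No argument in s attacks another argument in s."""
    return not any((a, b) in attacks for a in s for b in s)

def defends(s, a, attacks, arguments):
    """Every attacker of a is counter-attacked by some member of s."""
    return all(any((d, b) in attacks for d in s)
               for b in arguments if (b, a) in attacks)

def is_admissible(s, attacks, arguments):
    """Conflict-free and defends each of its members."""
    return is_conflict_free(s, attacks) and all(
        defends(s, a, attacks, arguments) for a in s)

# Example AF: a attacks b, b attacks c.
arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(is_conflict_free({"a", "c"}, attacks))          # True
print(is_admissible({"a", "c"}, attacks, arguments))  # True: a defends c against b
print(is_admissible({"c"}, attacks, arguments))       # False: nothing defends c
```

A "local expansion operator" in the paper's sense would modify such an AF; the invariance question is whether sets like {"a", "c"} above remain conflict-free or admissible after the modification.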


DARPA pushes for AI that can explain its decisions

Engadget

Companies like to flaunt their use of artificial intelligence to the point where it's virtually meaningless, but the truth is that AI as we know it is still quite dumb. While it can generate useful results, it can't explain why it produced those results in meaningful terms, or adapt to ever-evolving situations. DARPA thinks it can move AI forward, though. It's launching an Artificial Intelligence Exploration program that will invest in new AI concepts, including "third wave" AI with contextual adaptation and an ability to explain its decisions in ways that make sense. If it identified a cat, for instance, it could explain that it detected fur, paws and whiskers in a familiar cat shape.


Modular Semantics and Characteristics for Bipolar Weighted Argumentation Graphs

arXiv.org Artificial Intelligence

This paper addresses the semantics of weighted argumentation graphs that are bipolar, i.e., contain both attacks and supports for arguments. We build on previous work by Amgoud, Ben-Naim et al. We study the various characteristics of acceptability semantics that have been introduced in these works. We provide a simplified and mathematically elegant formulation of these characteristics. The formulation is modular because it cleanly separates aggregation of attacking and supporting arguments (for a given argument a) from the computation of their influence on a's initial weight. We discuss various semantics for bipolar argumentation graphs in the light of these characteristics. Based on the modular framework, we prove general convergence and divergence theorems. We show that all semantics converge for all acyclic graphs and that no sum-based semantics can converge for all graphs. In particular, we show divergence of Euler-based semantics for certain cyclic graphs. We also provide the first semantics for bipolar weighted graphs that converges for all graphs.
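To make the "modular" idea concrete, the sketch below separates aggregation of attackers/supporters from an influence function applied to the initial weight, then iterates to a fixed point. The particular sum-based aggregation and saturating influence function are one illustrative choice, not the paper's exact definitions.

```python
# Illustrative sketch of a modular semantics for a bipolar weighted
# argumentation graph: aggregation is kept separate from influence.
import math

def aggregate(strengths, attackers, supporters):
    """Sum supporter strengths minus attacker strengths (one possible choice)."""
    return sum(strengths[s] for s in supporters) - sum(strengths[a] for a in attackers)

def influence(initial_weight, energy):
    """Pull the initial weight toward 1 or 0 depending on the aggregate
    'energy' (a saturating, Euler-style update; purely illustrative)."""
    if energy >= 0:
        return initial_weight + (1 - initial_weight) * (1 - math.exp(-energy))
    return initial_weight * math.exp(energy)

def iterate(initial, attackers, supporters, steps=1000, tol=1e-9):
    """Update all strengths repeatedly; report whether a fixed point was reached."""
    strengths = dict(initial)
    for _ in range(steps):
        new = {a: influence(initial[a], aggregate(strengths, attackers[a], supporters[a]))
               for a in initial}
        if all(abs(new[a] - strengths[a]) < tol for a in initial):
            return new, True
        strengths = new
    return strengths, False  # no convergence within `steps` iterations

# Tiny acyclic example: b supports a, c attacks a.
initial = {"a": 0.5, "b": 0.8, "c": 0.4}
attackers = {"a": ["c"], "b": [], "c": []}
supporters = {"a": ["b"], "b": [], "c": []}
final, converged = iterate(initial, attackers, supporters)
print(converged, round(final["a"], 3))
```

On acyclic graphs such as this example the iteration settles quickly, which matches the paper's general claim that all semantics converge for acyclic graphs; the divergence results concern certain cyclic graphs under sum-based aggregation.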


Explainable AI and how it is transforming Insurance

#artificialintelligence

Logical Glue is the leading cloud-based predictive analytics platform for the insurance sector. We use Explainable Artificial Intelligence (XAI) to deliver actionable insight to insurers in order to improve profitability. The power of XAI is being able to understand the reasons why decisions have been made, in the form of human-interpretable rules, allowing better internal communication between technical and business users and an improved experience for customers. We partner with insurers, re-insurers and MGAs to predict, automate and improve areas along the entire customer journey, such as quote conversion, risk pricing, claims process optimisation, and customer retention. The time to value for these benefits has been realised in a matter of weeks, not months. I would be delighted to talk through some of our use cases in more detail; to learn more, please contact Naveed Ashraf at naveed.ashraf@logicalglue.com


DARPA's 'explainable A.I.' a common-sense comfort in a machine takeover world

#artificialintelligence

Two-and-a-half years ago, technology wizard and Stanford University Master of Science in Computer Science graduate David Gunning joined DARPA, the Defense Advanced Research Projects Agency, to manage a program to develop explainable artificial intelligence. And listen up: The XAI arena, as it's abbreviated, is where we want to head -- this is where technology development ought to focus. XAI is the common-sense older brother in a digitized world filled with flashy, privacy-invading, data-gobbling gadgets and machine-controlling bullies. The goal of XAI, Gunning said in a recent telephone conversation, is not so much to "take human thinking and put it into machines," as nearly all of today's artificial intelligence seeks to do. Rather, XAI's aim is to equip the machine with the ability to tell its human operators why it arrives at the conclusions it does -- to make the machine explain itself, so to speak.


O.P.C.W., Chemical Weapons Watchdog, Gets Power to Assign Blame

NYT > Middle East

"Are we really committed to prohibiting chemical weapons, or only committed if no one is upset, only committed if no one complains, only committed if no one is put out?" "Without attributing responsibility for their use, our words will become empty rhetoric," Mr. Mason said. "Those that use chemical weapons will more confidently wear a cloak of impunity." The Syrian ambassador, Bassam al-Sabbagh, devoted most of his speech to the group to attacking the United States, which he said had caused the bloodshed in the Middle East, distorted the operation of the O.P.C.W. and other international bodies and conducted "a campaign of false allegations" against his country. He even blamed Americans for the use of chemical weapons in Syria. The United States gives chemical weapons to allied militias, he claimed, to use "in order to inflame international public opinion against the Syrian government."


Watchdog OPCW gets authority to assign blame in Syria chemical attacks despite Russia opposition

The Japan Times

BRUSSELS – Member nations of the global chemical weapons watchdog voted Wednesday to give the organization the authority to apportion blame for illegal attacks, expanding its powers following a bitter dispute pitting Britain and its Western allies against Russia and Syria. An 82-24 vote provided the two-thirds majority needed to enlarge the purview of the Organization for the Prohibition of Chemical Weapons. The organization was created to implement a 1997 treaty that banned chemical weapons, but lacked a mandate to name the parties it found responsible for using them. Many participating nations saw the inability to assign responsibility as a senseless handicap, especially after fatal chemical attacks during the war in Syria. Russia opposed adding a new license to the agency's portfolio, saying that was a decision that belonged to the United Nations.


Member States Allow Chemical Arms Watchdog to Assign Blame for Attacks

U.S. News

From 2015 to 2017 a joint United Nations-OPCW team had been appointed to assign blame for chemical attacks in Syria. It found that Syrian government troops used the nerve agent sarin and chlorine barrel bombs on several occasions, while Islamic State militants were found to have used sulfur mustard.


Watchdog Gets Authority to Assign Blame in Chemical Attacks

U.S. News

Britain made its proposal in the wake of the chemical attacks on an ex-spy and his daughter in the English city of Salisbury, as well as in Syria's civil war and attacks by the Islamic State group in Iraq. Britain has accused Russia of using a nerve agent in the attempted assassination in March of former spy Sergei Skripal, which Moscow strongly denies.