Explanation & Argumentation


Chemical arms team to assign blame for Syrian attacks despite Russia, Iran opposition

The Japan Times

THE HAGUE, NETHERLANDS – The global chemical weapons watchdog will in February begin to assign blame for attacks with banned munitions in Syria's war, using new powers approved by member states but opposed by Damascus and its key allies Russia and Iran. The agency was handed the new task in response to an upsurge in the use of chemical weapons in recent years, notably in the Syrian conflict, where scores of attacks with sarin and chlorine have been carried out by Syrian forces and rebel groups, according to a joint United Nations-OPCW investigation. A core team of 10 experts charged with apportioning blame for poison gas attacks in Syria will be hired soon, Fernando Arias, the new head of the Organisation for the Prohibition of Chemical Weapons (OPCW), told the Foreign Press Association of the Netherlands on Tuesday. The Syria team will be able to look into all attacks previously investigated by the OPCW, dating back to 2014. At a special session in June, the OPCW's 193 member states granted it the additional powers to identify individuals and institutions responsible for attacks.


Global Bigdata Conference

#artificialintelligence

How much can anyone trust a recommendation from an AI? Yaroslav Kuflinski, from Iflexion, gives an explanation of explainable AI. She is lying sedated on a gurney that's bumping towards the operating theater. It squeaks to a halt and a hurried member of hospital staff thrusts a form at you to sign. It describes the urgent surgical procedure your child is about to undergo--and it requires your signature if the operation is to go ahead. But here's the rub--at the top of the form, in large, bold letters, it says "DIAGNOSIS AND SURGICAL PLAN COPYRIGHT ACME ARTIFICIAL INTELLIGENCE COMPANY." At this specific moment, do you think you are owed a reasonable, plain-English explanation of all the inscrutable decisions that an AI has lately been making on your daughter's behalf? In short, do we need explainable AI?


Top 10 2019 Business Intelligence Trends: Explainable AI

#artificialintelligence

The promise of artificial intelligence (AI) suggests that machines will augment human understanding by automating decision-making. Josh Parenteau, Director of Market Intelligence at Tableau, explained how artificial intelligence and machine learning will act as another perspective, "helping uncover those insights that have gone previously undiscovered." Gartner research indicates that by 2020, "85% of CIOs will be piloting artificial intelligence programs through a combination of buy, build, and outsource efforts." But as organizations become more reliant on machine learning models, how can humans be sure that these recommendations are trustworthy? Many machine learning applications don't currently have a way to "look under the hood" to understand the algorithms or logic behind decisions and recommendations, so organizations piloting AI programs are rightfully concerned about widespread adoption.


Explanation in artificial intelligence: Insights from the social sciences

#artificialintelligence

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people bring certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics.


Explainable Artificial Intelligence and Topological Data Analysis

#artificialintelligence

Our recent publication "Algorithmic Canonical Stratifications of Simplicial Complexes" proposes a new algorithm for data analysis that offers a topology-aware path towards explainable artificial intelligence. Despite (or, perhaps, due to) being mathematically rigorous, the text of the original work is virtually impenetrable for readers not familiar with the concepts, tools, and notation of topology. In order to convey our ideas to a wider audience, we present this supplemental introduction. Here, we summarize and explain in plain English the motivation, reasoning, and methods of our new topological data analysis algorithm that we term "canonical stratification". Machine learning has advanced significantly in recent years and has proven itself to be a powerful and versatile tool in a variety of data-driven disciplines.


AI, Explain Yourself

Communications of the ACM

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment. Often, however, the "reasoning" behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI "explainable" to humans--for example, to designers who can improve it, or to users who can better judge when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used. Some explainable AI, or XAI, has long been familiar as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or as being chosen by similar users.
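
As a concrete illustration of that familiar recommender-style explanation, here is a minimal Python sketch. The toy catalog, attribute tags, and function name are invented for this example; it is not the system any particular retailer uses. Items are suggested by attribute overlap with past purchases, and the shared attributes double as the explanation shown to the user.

    # Minimal sketch of an attribute-style recommender explanation (illustrative
    # only). Items that share attributes with a user's purchases are suggested,
    # and the shared attributes become the plain-language reason.

    CATALOG = {
        "Dune":                {"science fiction", "classic"},
        "Neuromancer":         {"science fiction", "cyberpunk"},
        "Snow Crash":          {"cyberpunk", "satire"},
        "Pride and Prejudice": {"classic", "romance"},
    }

    def explain_recommendations(purchased, catalog=CATALOG):
        """Rank unpurchased items by attribute overlap; report the overlap as the reason."""
        owned_attrs = set().union(*(catalog[title] for title in purchased))
        suggestions = []
        for title, attrs in catalog.items():
            if title in purchased:
                continue
            shared = attrs & owned_attrs
            if shared:
                suggestions.append((len(shared), title, shared))
        suggestions.sort(reverse=True)
        return [f"{title}: suggested because it shares {sorted(shared)} with your purchases"
                for _, title, shared in suggestions]

    if __name__ == "__main__":
        for line in explain_recommendations({"Dune", "Neuromancer"}):
            print(line)

The point of the sketch is that the explanation falls out of the recommendation logic itself, which is exactly what makes this style of XAI feel natural to users.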


Holy Grail of AI for Enterprise -- Explainable AI

#artificialintelligence

Having deployed about 20 AI solutions in the past 10 years, from building an Intelligent Audience Measurement System for a media company in 2009 to an Intelligent Financial Compliance System for a large CPG customer in 2018, one skepticism has stayed constant with enterprise customers -- trustworthy production deployment of an AI system. Yes, it is the Holy Grail of AI, and for the right reason: whether it is about losing a high-value customer due to a wrong churn prediction or losing dollars due to the incorrect classification of a financial transaction. In reality, customers are less bothered about the accuracy of the AI model; their concern is the data scientist's inability to answer "How do I trust its decision making?" XAI is an emerging branch of AI in which AI systems are made to explain the reasoning behind every decision they make. Following is a simple depiction of the full circle of AI.
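
To make the idea concrete, below is a hedged sketch of one simple form of XAI for a churn-style model: a shallow decision tree trained on synthetic data, with its global feature importances surfaced next to each prediction. The feature names and data are invented for illustration; this is not the approach used in the deployments described above, just one way to give a reviewer something to inspect beyond a bare score.

    # Hedged sketch: attach a coarse explanation to a churn-style prediction by
    # reporting the model's global feature importances alongside the output.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    FEATURES = ["monthly_spend", "support_tickets", "tenure_months"]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    # Synthetic label: customers with many tickets and short tenure tend to churn.
    y = (X[:, 1] - X[:, 2] > 0.5).astype(int)

    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    def predict_with_explanation(row):
        """Return the prediction plus the features the tree relies on most."""
        pred = model.predict([row])[0]
        ranked = sorted(zip(FEATURES, model.feature_importances_),
                        key=lambda p: p[1], reverse=True)
        reasons = ", ".join(f"{name} ({weight:.2f})"
                            for name, weight in ranked if weight > 0)
        return f"churn={bool(pred)}; most influential features: {reasons}"

    print(predict_with_explanation([0.1, 1.8, -0.4]))

Global importances are a blunt instrument compared with per-decision explanations, but even this level of visibility addresses the "How do I trust its decision making?" question better than an unexplained score.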


AFRA: Argumentation framework with recursive attacks

arXiv.org Artificial Intelligence

The issue of representing attacks on attacks in argumentation is receiving increasing attention as a useful conceptual modelling tool in several contexts. In this paper we present AFRA, a formalism encompassing unlimited recursive attacks within argumentation frameworks. AFRA satisfies the basic requirements of definition simplicity and rigorous compatibility with Dung's theory of argumentation. This paper provides a complete development of the AFRA formalism complemented by illustrative examples and a detailed comparison with other recursive attack formalizations.
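
A rough sense of what "attacks on attacks" means can be given in a few lines of Python. The sketch below assumes the usual AFRA-style reading of defeat (an attack defeats whatever it targets, and also indirectly defeats any attack whose source it targets); it illustrates only the data structure and a conflict-freeness check, not the full formalism or semantics developed in the paper.

    # Illustrative sketch of an argumentation framework with recursive attacks.
    ARGUMENTS = {"a", "b", "c"}

    # Each attack has a name, a source argument, and a target that is either an
    # argument or the name of another attack (the recursive case).
    ATTACKS = {
        "alpha": ("a", "b"),       # a attacks b
        "beta":  ("b", "c"),       # b attacks c
        "gamma": ("c", "alpha"),   # c attacks the attack alpha (attack on an attack)
    }

    def defeats(att_name, element):
        """att_name defeats `element` if it targets it directly, or targets its
        source when `element` is itself an attack (indirect defeat)."""
        _, target = ATTACKS[att_name]
        if target == element:
            return True
        return element in ATTACKS and ATTACKS[element][0] == target

    def conflict_free(subset):
        """A set of arguments and attacks with no internal defeat."""
        return not any(defeats(x, y) for x in subset if x in ATTACKS for y in subset)

    print(conflict_free({"a", "alpha"}))          # True: alpha's target b is outside the set
    print(conflict_free({"a", "alpha", "b"}))     # False: alpha directly defeats b
    print(conflict_free({"b", "beta", "gamma"}))  # False: beta hits gamma's source c

Allowing targets like "alpha" to appear on the right-hand side of an attack is what distinguishes this structure from a plain Dung framework, where attacks may only target arguments.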


Autonomous cars present new challenges for Explainable AI - Which-50

#artificialintelligence

As society trusts more of its operations to autonomous systems, companies are increasingly making it a requirement that humans can understand exactly how a machine has reached a certain conclusion. The research effort behind Explainable AI (XAI) is gaining traction as technology giants like Microsoft, Google and IBM agree that AI should be able to explain its decision making. XAI, sometimes called transparent AI, has the backing of the Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense, which is funding a large program to develop state-of-the-art explainable AI techniques and models. Dr Brian Ruttenberg was formerly the senior scientist at Charles River Analytics (CRA) in Cambridge, where he was the principal investigator for CRA's effort on DARPA's XAI program. He argues XAI helps to identify bias or errors in algorithms and engenders trust in the technology.


Tech Advances Make It Easier to Assign Blame for Cyberattacks

WSJ.com: WSJD - Technology

"All you have to do is look at the attacks that have taken place recently--WannaCry, NotPetya and others--and see how quickly the industry and government is coming out and assigning responsibility to nation states such as North Korea, Russia and Iran," said Dmitri Alperovitch, chief technology officer at CrowdStrike Inc., a cybersecurity company that has investigated a number of state-sponsored hacks. The White House and other countries took roughly six months to blame North Korea and Russia for the WannaCry and NotPetya attacks, respectively, while it took about three years for U.S. authorities to indict a North Korean hacker for the 2014 attack against Sony . Forensic systems are gathering and analyzing vast amounts of data from digital databases and registries to glean clues about an attacker's infrastructure. These clues, which may include obfuscation techniques and domain names used for hacking, can add up to what amounts to a unique footprint, said Chris Bell, chief executive of Diskin Advanced Technologies, a startup that uses machine learning to attribute cyberattacks. Additionally, the increasing amount of data related to cyberattacks--including virus signatures, the time of day the attack took place, IP addresses and domain names--makes it easier for investigators to track organized hacking groups and draw conclusions about them.