Explanation & Argumentation


Australian basketball team opts out of pride jersey after 'barrage of abuse and harmful commentary'

FOX News

An Australian professional basketball team decided not to wear a jersey in support of the LGBTQ community on Friday night after the organization said the plan led to abuse and to players being targeted. The Cairns Taipans, of the National Basketball League, released a statement on the issue Wednesday explaining the reasoning behind their decision, and the organization made clear that it supports the LGBTQ community.


Counterfactual explanations for land cover mapping: interview with Cassio Dantas

AIHub

In their paper Counterfactual Explanations for Land Cover Mapping in a Multi-class Setting, Cassio Dantas, Diego Marcos and Dino Ienco apply counterfactual explanations to remote sensing time series data for land-cover mapping classification. In this interview, Cassio tells us more about explainable AI and counterfactuals, the team's research methodology, and their main findings. Our paper falls into the growing topic of explainable artificial intelligence (XAI). Despite the performance achieved by recent deep learning approaches, they remain black-box models whose internal behaviour is poorly understood. To improve the general acceptability and trustworthiness of such models, there is a growing need to improve their interpretability and make their decision-making processes more transparent.
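
As a rough illustration of the counterfactual idea the interview refers to (and not the authors' actual method), the sketch below perturbs an input time series until a toy multi-class classifier changes its prediction, while a penalty keeps the counterfactual close to the original input. The classifier, dimensions and hyperparameters are all hypothetical placeholders.

```python
import numpy as np

# Toy multi-class "land-cover" classifier: softmax over linear scores.
# The paper uses a deep network on satellite image time series; this
# stand-in only illustrates the generic counterfactual recipe.
rng = np.random.default_rng(0)
n_timesteps, n_classes = 24, 4          # e.g. a monthly series, 4 cover classes
W = rng.normal(size=(n_classes, n_timesteps))
b = rng.normal(size=n_classes)

def predict_proba(x):
    z = W @ x + b
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

def counterfactual(x, target_class, steps=500, lr=0.05, lam=0.1):
    """Nudge x until the classifier prefers target_class.

    Minimizes  -log p(target | x_cf) + lam * ||x_cf - x||^2
    by gradient descent; lam keeps the counterfactual close to the input.
    """
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        # Gradient of the cross-entropy w.r.t. the input for a softmax-linear model.
        grad_ce = W.T @ (p - np.eye(n_classes)[target_class])
        grad = grad_ce + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
        if predict_proba(x_cf).argmax() == target_class:
            break
    return x_cf

x = rng.normal(size=n_timesteps)        # hypothetical input time series
orig = predict_proba(x).argmax()
target = (orig + 1) % n_classes         # ask for some other class
x_cf = counterfactual(x, target)
print("original class:", orig, "counterfactual class:", predict_proba(x_cf).argmax())
print("L2 change needed:", np.linalg.norm(x_cf - x).round(3))
```

The change between the original series and its counterfactual is what serves as the explanation: it shows which parts of the input would have to differ, and by how much, for the model to assign a different land-cover class.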


Council Post: Explainable AI: The Importance Of Adding Interpretability Into Machine Learning

#artificialintelligence

AI is fast becoming embedded in industries, economies and lives, making decisions, recommendations and predictions. These trends mean it's business-critical to understand how AI-enabled systems arrive at specific outputs. It's not enough for an AI algorithm to generate the right result; knowing "the reason why" is now a business fundamental. The process has to be transparent, trustworthy and compliant, far removed from the opaque "black-box" concept that has characterized some AI advances in recent times. At the same time, these advances should not be stifled.


Interview with Katharina Weitz and Chi Tai Dang: Do we need explainable AI in companies?

AIHub

In their project report paper Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective, Katharina Weitz, Chi Tai Dang and Elisabeth André investigated employees' specific needs and attitudes towards AI. In this interview, Katharina and Chi Tai tell us more about this work. Our paper examines the current state of AI use in companies. It is particularly important to us to capture the perspective of employees.


4 pillars for using AI responsibly in a skill-based organization – The European Sting

#artificialintelligence

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum. Jobs are changing too quickly for traditional workforce models to keep up. This has given rise to a new term, the 'skill-based organization.' Studies indicate that 90% of business executives are now experimenting with building a skill-based organization. Moving from jobs to skills is complex.


Why 'Explainable AI' Can Benefit Business - The New Stack

#artificialintelligence

If you've ever gotten a letter from a bank that explained how different financial issues influenced a credit application, you've seen explainable AI at work: a computer used math and a set of complex formulas to calculate a score and determine whether to approve or deny your application. In making that decision, some data points were either more or less important. Maybe your long history of on-time payments or your low amount of debt contributed to your application's approval. Similarly, explainable AI shows humans how it arrived at a decision by evaluating different inputs in its calculations. While that might sound obscure or only relevant to the most hardcore data people, explainable AI brings significant business advantages that anyone interested in applying AI should consider.
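
To make the credit-application example concrete, here is a small sketch of the kind of per-feature explanation the article describes, using a plain logistic-regression score whose contributions can be read off directly. The feature names, weights and baseline are invented for illustration and do not come from the article or any real lender.

```python
import numpy as np

# Hypothetical credit-scoring model: a logistic regression whose weights and
# feature names are made up. For a linear model, each feature's contribution
# to the score is simply weight * (value - baseline), which is the kind of
# transparency the article is describing.
features = ["on_time_payment_rate", "debt_to_income", "credit_age_years", "recent_inquiries"]
weights  = np.array([3.0, -4.0, 0.15, -0.5])      # illustrative only
bias     = -1.0
baseline = np.array([0.5, 0.35, 5.0, 2.0])        # an "average applicant" reference point

def score(x):
    """Probability-like approval score between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

def explain(x):
    """Print each feature's contribution relative to the baseline applicant."""
    contrib = weights * (x - baseline)
    for i in np.argsort(-np.abs(contrib)):
        direction = "raised" if contrib[i] > 0 else "lowered"
        print(f"{features[i]:>22}: {direction} the score by {abs(contrib[i]):.2f}")

applicant = np.array([0.98, 0.20, 9.0, 1.0])      # long on-time history, low debt
print(f"approval score: {score(applicant):.2f}")
explain(applicant)
```

For non-linear models the same idea requires attribution techniques such as SHAP or LIME rather than reading weights off directly, but the output handed to the applicant has the same shape: a ranked list of which inputs pushed the decision which way.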


My Husband Always Brags About How "Cheap" His Christmas Gifts to Me Are

Slate

Pay Dirt is Slate's money advice column. Send it to Lillian, Athena, and Elizabeth here. My spouse (29M) and I (31F) have been together for about a decade. We both grew up well below the poverty line, and while we're not rich now, we're quite comfortable financially. Our finances aren't fully merged; we split responsibility for bills, agree to save a certain amount, and otherwise have spending money of our own. But we have very different philosophies towards discretionary spending.


AIhub blog post highlights 2022

AIHub

Over the course of the year, we've had the pleasure of working with many great researchers from across the globe. As 2022 draws to a close, we take a look back at some of our favourite blog posts from our contributors. Olga Vechtomova and Gaurav Sahu envisioned and developed a system, LyricJam Sonic, that uses AI to create a real-time generative stream of music based on an artist's own catalogue of studio recordings. The purpose is to inspire the artist with potentially unexpected combinations of sounds. Christopher Franz and Kevin Schewior write about how they applied a well-known algorithm for solving two-player games to the problem of synthesizing new molecules.


Towards Evidence Retrieval Cost Reduction in Abstract Argumentation Frameworks with Fallible Evidence

Journal of Artificial Intelligence Research

Arguments in argumentation systems cannot always be considered as standalone entities, requiring the consideration of the pieces of evidence they rely on. This evidence might have to be retrieved from external sources such as databases or the web, and each attempt to retrieve a piece of evidence comes with an associated cost. Moreover, a piece of evidence may be available in a given scenario but not in others, and this is not known beforehand. As a result, the collection of active arguments (whose entire set of evidence is available) that can be used by the argumentation machinery of the system may vary from one scenario to another. In this work, we consider an Abstract Argumentation Framework with Fallible Evidence that accounts for these issues, and propose a heuristic measure used as part of the acceptability calculus (specifically, for building pruned dialectical trees) with the aim of minimizing the evidence retrieval cost of the arguments involved in the reasoning process. We provide an algorithmic solution that is empirically tested against two baselines and formally show the correctness of our approach.
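
As a loose illustration of the setting described in this abstract (and not of the paper's actual heuristic measure), the sketch below models arguments whose evidence must be retrieved at a cost and may turn out to be unavailable; it probes each argument's cheapest evidence first and stops as soon as one piece is missing, so inactive arguments do not incur their full retrieval cost. All names, costs and availabilities are made up.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    name: str
    evidence: frozenset   # ids of the pieces of evidence this argument relies on

# Hypothetical retrieval costs and (unknown-until-queried) availability.
cost = {"e1": 1.0, "e2": 5.0, "e3": 2.0}
available = {"e1": False, "e2": True, "e3": True}

arguments = [
    Argument("A", frozenset({"e1", "e2"})),
    Argument("B", frozenset({"e3"})),
    Argument("C", frozenset({"e1"})),
]

def active_arguments(args):
    """Return the arguments whose full evidence is available, and the cost paid."""
    retrieved, spent = {}, 0.0
    active = []
    for arg in args:
        ok = True
        for e in sorted(arg.evidence, key=cost.get):   # probe cheapest evidence first
            if e not in retrieved:                     # each piece is paid for at most once
                spent += cost[e]
                retrieved[e] = available[e]
            if not retrieved[e]:
                ok = False                             # evidence missing: stop probing this argument
                break
        if ok:
            active.append(arg.name)
    return active, spent

names, total = active_arguments(arguments)
print("active arguments:", names, "| evidence retrieval cost:", total)
# With the toy data above, e1 turns out to be unavailable, so A and C are
# inactive and e2 is never fetched; only e1 and e3 are paid for (cost 3.0).
```

The paper's contribution sits one level up from this toy: its heuristic orders the work done while building pruned dialectical trees so that the overall evidence retrieval cost of the reasoning process is kept low, and the authors show the approach correct and test it against two baselines.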


Creating more explainable artificial intelligence could enhance human work and creativity – The Varsity

#artificialintelligence

We would like to acknowledge that The Varsity's office is built on the traditional territory of several First Nations, including the Huron-Wendat, the Petun First Nations, the Seneca, and most recently, the Mississaugas of the Credit. Journalists have historically harmed Indigenous communities by overlooking their stories, contributing to stereotypes, and telling their stories without their input. Therefore, we make this acknowledgement as a starting point for our responsibility to tell those stories more accurately, critically, and in accordance with the wishes of Indigenous peoples.