Explanation & Argumentation


Human-Robot Relationship Headed Towards Building Trust

#artificialintelligence

On a normal day, humans encounter Artificial Intelligence (AI) numerous times. AI has become part of our daily routines in many ways, entering our lives through smartphones, appliances in our homes and technology in our cars. Since humans are on the verge of accepting robotics into society, the question that lingers in everyone's mind is 'Can robots be trusted?' Humans hold a mythical image of robots turning aggressive once they are given all the capabilities of humans, a conclusion we are pushed towards by movies, soap operas and dramas.


5 reasons why you need explainable AI

#artificialintelligence

The scariest thing about Artificial Intelligence is that we never know who the teacher is! If you're working on a tech startup, AI and Machine Learning are likely part of your roadmap (and if they're not, they should be). Artificial Intelligence (AI) is all around us. AI is there when you search for something on the Internet. AI helps us filter spam emails.


Explainable 'AI' using Gradient Boosted randomized networks Pt2 (the Lasso)

#artificialintelligence

This post is about LSBoost, an Explainable 'AI' algorithm which uses Gradient Boosted randomized networks for pattern recognition. I've already presented some promising examples of LSBoost used with Ridge Regression weak learners. In mlsauce's version 0.7.1, the Lasso can also be used as an alternative ingredient for the weak learners. Here is a comparison of the regression coefficients obtained with mlsauce's implementations of Ridge regression and the Lasso. The following example looks at training set error versus testing set error, as a function of the regularization parameter, for both Ridge regression and Lasso-based weak learners.
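The original post relies on mlsauce's LSBoost implementation; as a minimal, illustrative sketch of the underlying comparison it describes (Ridge versus Lasso coefficients and train/test error as the regularization parameter varies), the following uses scikit-learn's standalone Ridge and Lasso estimators rather than the mlsauce API, and the dataset and alpha grid are placeholders:

```python
# Illustrative sketch only: compares Ridge and Lasso fits across a grid of
# regularization strengths, reporting train/test MSE and coefficient sparsity.
# This is NOT mlsauce's LSBoost; it uses plain scikit-learn estimators.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for alpha in (0.01, 0.1, 1.0, 10.0):
    for name, model in (("ridge", Ridge(alpha=alpha)), ("lasso", Lasso(alpha=alpha))):
        model.fit(X_train, y_train)
        train_err = mean_squared_error(y_train, model.predict(X_train))
        test_err = mean_squared_error(y_test, model.predict(X_test))
        print(f"{name:5s} alpha={alpha:6.2f} "
              f"train MSE={train_err:9.1f} test MSE={test_err:9.1f} "
              f"nonzero coefs={np.sum(model.coef_ != 0)}")
```

As the excerpt suggests, increasing the regularization parameter typically raises training error while the Lasso drives more coefficients exactly to zero, which is the trade-off the post's figures compare for the two kinds of weak learners.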


Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks

#artificialintelligence

Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental states, and is crucial to success in human ad-hoc teaming. We believe that robots collaborating with human users should demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations, where the robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication, based on its online Bayesian inference of the user's mental state. To evaluate our framework, we conduct a user study on a real-time human-robot cooking task.
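The paper's hierarchical mind model is more elaborate than this, but the core mechanism it mentions, online Bayesian inference of the user's mental state, can be sketched as a simple belief update over possible goals. The goal names, actions and likelihood table below are hypothetical placeholders, not taken from the paper:

```python
# Minimal illustrative sketch (not the paper's actual model): online Bayesian
# inference of a user's goal from a stream of observed actions.
import numpy as np

goals = ["boil_water", "chop_vegetables", "plate_dish"]   # hypothetical goals
prior = np.full(len(goals), 1.0 / len(goals))              # uniform prior over goals

# Hypothetical likelihood table P(action | goal) for two observable actions.
likelihood = {
    "pick_up_pot":   np.array([0.7, 0.1, 0.2]),
    "pick_up_knife": np.array([0.1, 0.8, 0.1]),
}

belief = prior.copy()
for action in ["pick_up_pot", "pick_up_pot", "pick_up_knife"]:
    belief = belief * likelihood[action]   # Bayes rule: posterior proportional to likelihood * prior
    belief /= belief.sum()                 # normalize to a probability distribution
    print(action, dict(zip(goals, np.round(belief, 3))))
```

A discrepancy between this inferred belief and the robot's own plan is the kind of trigger the paper uses for generating an explanation to the user.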


PhD in Safe and explainable AI

#artificialintelligence

Prospective candidates are expected to have a strong (distinction-level) Master's degree in computer science, mathematics, statistics or related disciplines. During their PhD journey, students will have the opportunity to undertake a variety of training activities, including interdisciplinary training on responsible research, public engagement and developing an entrepreneurial mindset, in addition to regular seminars and workshops held at Warwick. We encourage applications from candidates with non-standard backgrounds (e.g.


VisxAI Job Post Details – Trust in Human-Machine Partnership (THuMP)

#artificialintelligence

THuMP is a multi-disciplinary project with the ambitious goal of advancing the state of the art in trustworthy human-AI decision-support systems. THuMP will address the technical challenges involved in creating explainable AI (XAI) systems, with a focus on Visualization for Explainable Planning and Argumentation, so that people using the system can better understand the rationale behind, and trust, the suggestions made by an AI system. The project is conducted in collaboration with three project partners: Schlumberger and Save the Children, which provide use cases for the project, and a law firm, which will cooperate in considering the legal implications of enhancing machines with transparency and the ability to explain. The candidate will be responsible for conducting research on the interfaces required to support explainability in the context of decision making in human-machine partnerships. Tasks will involve designing new visual layouts, building the interaction infrastructure for the project, developing a prototype interface for communicating with users, and designing and conducting experiments with human subjects based on the use cases co-created with the project partners.


Explaining artificial intelligence in human-centred terms – Martin Schüßler

#artificialintelligence

Because AI involves interactions between machines and humans, rather than simply the former replacing the latter, 'explainable AI' is a new challenge. Intelligent systems based on machine learning are penetrating many aspects of our society. They span a wide variety of applications, from the seemingly harmless automation of micro-tasks, such as suggesting synonymous phrases in text editors, to more contestable uses, such as jail-or-release decisions, anticipating child-services interventions, predictive policing and many others. Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems are capable of outperforming humans. In many other cases, however, they have not lived up to exaggerated expectations.


The case for self-explainable AI

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Would you trust an artificial intelligence algorithm that works eerily well, making accurate decisions 99.9 percent of the time, but is a mysterious black box? Every system fails every now and then, and when it does, we want explanations, especially when human lives are at stake. And a system that can't be explained can't be trusted. That is one of the problems the AI community faces as its creations become smarter and more capable of tackling complicated and critical tasks.


Where explainable AI will be crucial in industry - TechHQ

#artificialintelligence

As artificial intelligence (AI) matures and new applications boom amid the transition to Industry 4.0, we are beginning to accept that machines can help us make decisions more effectively and efficiently. But, at present, we don't always have clear insight into how or why a model made those decisions – this is 'black-box AI'. In light of alleged bias in AI models across recruitment, loan decisions and healthcare applications, the ability to effectively explain the decisions made by an AI model has become imperative for the technology's further development and adoption. In December last year, the UK's Information Commissioner's Office (ICO) began moving to require by law that businesses and other organizations explain decisions made by AI, or face multimillion-dollar fines if they are unable to. Explainable AI is the concept of being able to describe the procedures, services and outcomes delivered or assisted by AI when that information is required, such as in the case of accusations of bias.


Causability and Explainability of Artificial Intelligence in Medicine - PubMed

#artificialintelligence

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI consisted of comprehensible, retraceable approaches. However, their weakness lay in dealing with the uncertainties of the real world. With the introduction of probabilistic learning, applications became increasingly successful but also increasingly opaque. We argue that there is a need to go beyond explainable AI.