Explanation & Argumentation: AI-Alerts


Explainable AI can improve hospice care, reduce costs

#artificialintelligence

Hospice is a compassionate approach focusing on quality of life for terminally ill patients and their caregivers, with approximately 1.55 million Medicare beneficiaries enrolled in hospice care for at least one day during 2018 – 17% more than in 2014. However, at least 14% of Medicare beneficiaries enrolled in hospice stayed for more than 180 days, and hospice stays beyond six months can result in substantial excess costs to healthcare organizations under value-based care arrangements. David Klebonis, COO of Palm Beach Accountable Care Organization, has developed highly interpretable machine learning models that not only accurately predict hospice overstays to drive appropriate hospice referrals but also, because of the sensitivity of the clinical decision involved, surface decision criteria that satisfy clinician scrutiny and promote adoption. "Artificial intelligence and machine learning have the potential to use data to predict patients with a high probability of expiring within the next six months, so that physicians can enter into conversations with these patients and their families about the possibility of referral to hospice," he said. Klebonis, who will address the topic this month at HIMSS22, said in Florida about 58% of Medicare decedents were in hospice at the time of death.
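The article gives no technical detail on Klebonis's models. As a minimal illustrative sketch of the kind of interpretable classifier this use case calls for, a depth-limited decision tree keeps every decision path short enough for a clinician to read and challenge; the feature names and data below are synthetic assumptions, not the Palm Beach ACO model:

```python
# Hypothetical sketch: an interpretable classifier for six-month mortality risk.
# All feature names and data are synthetic illustrations, not the model
# described in the article.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
# Synthetic claims-style features: age, functional dependency, utilization.
X = np.column_stack([
    rng.normal(80, 8, n),    # age in years
    rng.integers(0, 7, n),   # activities-of-daily-living dependencies (0-6)
    rng.poisson(1.5, n),     # inpatient admissions in prior 12 months
])
# Synthetic label: death within 180 days, loosely tied to the features.
logit = 0.05 * (X[:, 0] - 80) + 0.4 * X[:, 1] + 0.3 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A shallow tree trades some accuracy for rules a clinician can audit,
# which is the adoption argument made in the article.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["age", "adl_score", "admissions"]))
```

A sparse linear model would serve the same purpose; the design point is that the decision criteria remain inspectable rather than buried in a black box.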


Pulling back the curtain on neural networks

AIHub

When researchers at Oregon State University created new tools to evaluate the decision-making algorithms of an advanced artificial intelligence system, study participants assigned to use them did, indeed, find flaws in the AI's reasoning. But once investigators instructed participants to use the tools in a more structured and rigorous way, the number of bugs they discovered increased markedly. "That surprised us a bit, and it showed that having good tools for visualizing and interfacing with AI systems is important, but it's only part of the story," said Alan Fern, professor of computer science at Oregon State. Since 2017, Fern has led a team of eight computer scientists funded by a four-year, $7.1 million grant from the Defense Advanced Research Projects Agency to develop explainable artificial intelligence, or XAI -- algorithms through which humans can understand, build trust in, and manage the emerging generation of artificial intelligence systems. Dramatic advancements in the artificial neural networks, or ANNs, at the heart of advanced AI have created a wave of powerful applications for transportation, defense, security, medicine, and other fields.


Reports of the Workshops Held at the 2021 AAAI Conference on Artificial Intelligence

Interactive AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirty-Fifth Conference on Artificial Intelligence was held virtually on February 8-9, 2021. There were twenty-six workshops in the program: Affective Content Analysis, AI for Behavior Change, AI for Urban Mobility, Artificial Intelligence Safety, Combating Online Hostile Posts in Regional Languages during Emergency Situations, Commonsense Knowledge Graphs, Content Authoring and Design, Deep Learning on Graphs: Methods and Applications, Designing AI for Telehealth, 9th Dialog System Technology Challenge, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, 5th International Workshop on Health Intelligence, Hybrid Artificial Intelligence, Imagining Post-COVID Education with AI, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture During Training, Meta-Learning and Co-Hosted Competition, ...


Explainable Artificial Intelligence Thrives in Petroleum Data Analytics

#artificialintelligence

Explaining Traditional Engineering Models

It is well known that models of physical phenomena generated through mathematical equations can be explained, which is one of the main reasons engineers and scientists expect any candidate model of a physical phenomenon to be explainable. The explainability of such traditional models comes from solving the equations that define them: analytical solutions for reasonably simple equations, numerical solutions for complex ones. These solutions make it possible to answer almost any question that might be asked of the model, to explain why and how particular results are generated, and to examine the influence of all the involved parameters (variables) on one another and on the model's results (output parameters).
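As a concrete illustration of this kind of explainability (my example, not the article's), consider Darcy's law for single-phase flow through a porous medium, q = kAΔp/(μL). Because the model has an analytical form, every parameter's influence on the output can be derived exactly:

```python
# Illustrative sketch (not from the article): Darcy's law as an analytically
# solvable engineering model whose behavior can be fully explained.
import sympy as sp

k, A, dp, mu, L = sp.symbols("k A Delta_p mu L", positive=True)
q = k * A * dp / (mu * L)  # volumetric flow rate

# The analytical form answers "why" questions directly: each partial
# derivative is the exact sensitivity of the output to one parameter.
for var in (k, A, dp, mu, L):
    print(f"dq/d{var} =", sp.simplify(sp.diff(q, var)))
```

The derivatives show, for instance, that flow rate rises linearly with permeability k and falls with viscosity μ; this is exactly the parameter-by-parameter examination the paragraph above attributes to equation-based models.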



Ethics of Artificial Intelligence and Robotics (Stanford Encyclopedia of Philosophy)

#artificialintelligence

The ethics of AI and robotics is often focused on "concerns" of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues. Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some ...


Answering the Question Why: Explainable AI

#artificialintelligence

The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI? Although explaining the results of Machine Learning models--and producing consistent results from them--has never been easy, a number of emergent techniques have recently appeared to open the proverbial 'black box' that renders these models so difficult to explain. One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether those events are related and how frequently they take place together. When the knowledge graph environment is endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the critical aspect of causation necessary for Explainable AI.
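A minimal sketch of this temporal knowledge-graph idea follows; the event names, schema, and networkx representation are my assumptions for illustration, not any vendor's implementation:

```python
# Hypothetical sketch: a knowledge graph whose edges carry timestamps, so
# relations between events can be traversed forwards and backwards in time.
import networkx as nx

G = nx.MultiDiGraph()
# Each edge links a candidate cause to a candidate effect, with a timestamp.
G.add_edge("supplier_outage", "inventory_shortfall", relation="precedes", t=1)
G.add_edge("inventory_shortfall", "missed_shipment", relation="precedes", t=2)
G.add_edge("price_promotion", "demand_spike", relation="precedes", t=2)
G.add_edge("demand_spike", "inventory_shortfall", relation="precedes", t=3)

def events_before(graph, event, t_max):
    """Traverse backwards in time: which earlier events feed into `event`?"""
    return [
        (u, data["t"])
        for u, _, data in graph.in_edges(event, data=True)
        if data["t"] <= t_max
    ]

# Walking backwards from the shortfall surfaces its candidate triggers --
# the raw material for the causal explanations described above.
print(events_before(G, "inventory_shortfall", t_max=3))
```

Traversing forwards from an event works symmetrically via out-edges, which is what supports the "how one affected others" questions the excerpt mentions.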


Russia calls poisoning accusations by Britain 'nonsense'

Los Angeles Times

British Prime Minister Theresa May said Russia's involvement is "highly likely," and she gave the country a deadline of midnight Tuesday to explain its actions in the case. She is reviewing a range of economic and diplomatic measures in retaliation for the assault with what she identified as the military-grade nerve agent Novichok.