Plotting

Explanation & Argumentation


Unleashing the power of machine learning models in banking through explainable artificial intelligence (XAI)

#artificialintelligence

The "black-box" conundrum is one of the biggest roadblocks preventing banks from executing their artificial intelligence (AI) strategies. It's easy to see why: Picture a large bank known for its technology prowess designing a new neural network model that predicts creditworthiness among the underserved community more accurately than any other algorithm in the marketplace. This model processes dozens of variables as inputs, including never-before-used alternative data. The developers are thrilled, senior management is happy that they can expand their services to the underserved market, and business executives believe they now have a competitive differentiator. But there is one pesky problem: The developers who built the model cannot explain how it arrives at the credit outcomes, let alone identify which factors had the biggest influence on them.


Anthropic's quest for better, more explainable AI attracts $580M – TechCrunch

#artificialintelligence

Less than a year ago, Anthropic was founded by former OpenAI VP of research Dario Amodei, intending to perform research in the public interest on making AI more reliable and explainable. Its $124 million in funding was surprising then, but nothing could have prepared us for the company raising $580 million less than a year later. "With this fundraise, we're going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which capabilities and safety issues can emerge at-scale," said Amodei in the announcement. His sister Daniela, with whom he co-founded the public benefit corporation, said that having built out the company, "We're focusing on ensuring Anthropic has the culture and governance to continue to responsibly explore and develop safe AI systems as we scale." Because that's the problem category Anthropic was formed to examine: how to better understand the AI models increasingly in use in every industry as they grow beyond our ability to explain their logic and outcomes.


A Parable Of Explainability

#artificialintelligence

Ok, sure, machine learning is great for making predictions; but you can't use it to replace scientific theory. Not only will it fail to reach generalizable conclusions, but the result is going to lack elegance and explainability. We won't be able to understand it or build upon it! What makes an algorithm or theory or agent explainable? It's certainly not the ability to "look inside"; we're rather happy assuming that black boxes, such as brains, are capable of explaining their conclusions and theories. We scoff at the idea that perfectly transparent neural networks are "explainable" in a meaningful sense; so it's not visibility that makes something explainable.


Sen. Collins questions Garland on 'conflicting positions' on COVID mask mandates, southern border

FOX News

NSA Border Security Committee Chair Sheriff Mark Dannels on the border crisis and potential for Title 42 to be extended. Sen. Susan Collins, R-Maine, on Tuesday pressed Attorney General Merrick Garland on "conflicting positions" the Biden administration has held on a number of issues, including COVID-19 mandates and border security. During a Senate Appropriations Committee hearing, Collins asked Garland how the administration could justify its conclusion that the pandemic had "subsided enough to warrant the termination of Title 42" while at the same time arguing that "the public health consequences are dire enough to warrant compelled mask usage by Americans on public transportation." Sen. Susan Collins, R-Maine, questions Attorney General Merrick Garland during a Senate Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies hearing to discuss the fiscal year 2023 budget of the Department of Justice at the Capitol in Washington, DC, on April 26, 2022. In his response to Collins, Garland insisted that the role of the Justice Department is "not to make judgments about the public health and really not to make judgments about policy," but to instead "make determinations of whether the programs and requests of the agencies that are responsible for those are lawful."


Trainee teachers made sharper assessments about learning difficulties after receiving feedback from AI

AIHub

A trial in which trainee teachers who were being taught to identify pupils with potential learning difficulties had their work 'marked' by artificial intelligence has found the approach significantly improved their reasoning. It suggests that artificial intelligence (AI) could enhance teachers' "diagnostic reasoning": the ability to collect and assess evidence about a pupil, and draw appropriate conclusions so they can be given tailored support. During the trial, trainees were asked to assess six fictionalised "simulated" pupils with potential learning difficulties. They were given examples of their schoolwork, as well as other information such as behaviour records and transcriptions of conversations with parents. They then had to decide whether or not each pupil had learning difficulties such as dyslexia or Attention Deficit Hyperactivity Disorder (ADHD), and explain their reasoning.


YouTuber deliberately crashed his own plane for views, US aviation agency says

The Guardian

The US Federal Aviation Administration has revoked a YouTuber's pilot license after it concluded that he intentionally crashed his plane for the sake of gaining online views. On 24 November 2021, Trevor Jacob was flying over California's Los Padres national forest in his small turboprop plane when his propeller stopped working. "I'm over the mountains and I … have an engine out," Jacob said into his camera while sitting in the cockpit. He then proceeded to jump out of the plane, filming himself using a selfie stick before landing with his parachute into an open field. Jacob filmed the whole incident and uploaded it to YouTube in a video titled I Crashed My Plane.


A Logic-Based Explanation Generation Framework for Classical and Hybrid Planning Problems

Journal of Artificial Intelligence Research

In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the model of the human user such that the agent's plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and where KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that, when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
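The abstract's core definition lends itself to a toy illustration. The sketch below is not the authors' implementation: it encodes KBa and KBh as propositional Horn clauses (a simplifying assumption) and brute-forces a smallest explanation ε ⊆ KBa whose addition lets KBh entail q. The propositions and the planning scenario are invented for illustration.

```python
# Minimal sketch of the model reconciliation idea (not the paper's algorithms):
# find a smallest e ⊆ KBa such that KBh ∪ e entails the query q.
from itertools import combinations

def entails(kb, q):
    """Forward chaining over Horn rules: (frozenset(body), head). Facts have an empty body."""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in kb:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return q in derived

def reconcile(kb_agent, kb_human, q):
    """Return a minimum-cardinality explanation e ⊆ KBa such that KBh ∪ e entails q."""
    candidates = list(kb_agent)
    for k in range(len(candidates) + 1):
        for subset in combinations(candidates, k):
            if entails(kb_human | set(subset), q):
                return set(subset)
    return None  # q is not entailed even by KBh ∪ KBa

# Toy planning-flavoured example with hypothetical propositions.
fact = lambda p: (frozenset(), p)
rule = lambda body, head: (frozenset(body), head)

KBa = {fact("door_unlocked"), fact("has_key"),
       rule({"has_key"}, "door_unlocked"),
       rule({"door_unlocked"}, "plan_valid")}
KBh = {rule({"door_unlocked"}, "plan_valid")}  # the human lacks the door facts

print(reconcile(KBa, KBh, "plan_valid"))
# -> {(frozenset(), 'door_unlocked')}: telling the human the door is unlocked suffices.
```

The paper's cost functions generalize the "smallest subset" criterion used here, and its algorithms avoid this exhaustive search; the sketch only shows the shape of the problem.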


Explainable AI: Language Models

#artificialintelligence

Just like a coin, explainability in AI has two faces -- one it shows to the developers (who actually build the models) and the other to the users (the end customers). The former face (IE, or intrinsic explainability) is a technical indicator to the builder that explains the working of the model. The latter (EE, or extrinsic explainability) is proof to the customers about the model's predictions. While IE is required for any reasonable model improvement, we need EE for factual confirmation. A layperson who ends up using the model's prediction needs to know why the model is suggesting something.
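As one hedged illustration of the extrinsic face, the sketch below produces a user-facing explanation by occlusion: remove each input word and report how much the model's score changes. The `toy_score` function is a stand-in assumption, not a real language model.

```python
# Minimal sketch of an occlusion-style extrinsic explanation (toy model, not a real LM).
def toy_score(text: str) -> float:
    """Hypothetical sentiment scorer: higher means more positive."""
    lexicon = {"great": 1.0, "love": 0.8, "slow": -0.6, "broken": -1.0}
    words = text.lower().split()
    return sum(lexicon.get(w, 0.0) for w in words) / max(len(words), 1)

def explain(text: str):
    """Rank words by how much the score drops when each one is removed."""
    words = text.split()
    base = toy_score(text)
    impacts = ((w, base - toy_score(" ".join(words[:i] + words[i + 1:])))
               for i, w in enumerate(words))
    return sorted(impacts, key=lambda pair: abs(pair[1]), reverse=True)

for word, impact in explain("I love the camera but the app is slow"):
    print(f"{word:>8}: {impact:+.3f}")
```

The same occlusion idea scales to real language models by swapping `toy_score` for the model's prediction probability; the point is only that an end user can be shown which parts of the input pushed the prediction, without seeing inside the model.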


Federal judge blocks mask mandate for public transportation

FOX News

Gutfeld and guests discuss how the Biden administration is extending the federal mask mandate for another 15 days on 'Gutfeld!' A federal judge on Monday voided the Biden administration's mask mandate for travelers using public transportation such as trains and airplanes. The mandate from the Centers for Disease Control and Prevention applies to people as young as 2 years old, and had been set to expire a number of times but was recently extended to May 3 before Monday's ruling. The ruling from U.S. District Court Judge Kathryn Kimball Mizelle came in a case brought in Florida federal court by Health Freedom Defense Fund, Inc. and frequent air travelers Ana Daza and Sarah Pope against the administration. Judge Mizelle determined that the mandate violated the Administrative Procedure Act by exceeding the scope of the CDC's authority, being "arbitrary" and "capricious," and failing to go through the required notice-and-comment period for federal rulemaking.


Government Deep Tech 2022 Top Funding Focus Explainable AI, Photonics, Quantum

#artificialintelligence

DARPA, In-Q-Tel, and the US National Laboratories (examples: Argonne, Oak Ridge) are famous government funding agencies for deep tech on the forward boundaries, the near impossible, that have globally transformative solutions. The Internet is a prime example: more than 70% of the world's 7.8 billion people are online in 2022, daily mobile usage is closing in on 7 hours, and global wealth of $500 trillion is powered by the Internet. There is convergence between the early bets led by government funding agencies and the largest corporations and their investments. An example is from 2015, when I was invited to help the top 100 CEOs, representing nearly $100 trillion in assets under management, look ten years into the future for their investments. The resulting working groups and private summits led to the member companies investing in all the areas identified: quantum computing, blockchain, cybersecurity, big data, privacy and data, AI/ML, the future of fintech, financial inclusion, ...