Explainable AI (XAI) with Python

#artificialintelligence

Topics covered include:

- Importance of XAI in the modern world
- Differentiation of glass-box, white-box, and black-box ML models
- Categorization of XAI methods by scope, agnosticity, data type, and explanation technique
- The trade-off between accuracy and interpretability
- Application of Microsoft's InterpretML package to generate explanations of ML models (a minimal sketch follows below)
- The need for counterfactual and contrastive explanations
- Working principles and mathematical modeling of XAI techniques such as LIME, SHAP, DiCE, LRP, and counterfactual and contrastive explanations
- Application of LIME, SHAP, DiCE, and LRP to generate explanations for black-box models on tabular, textual, and image datasets

This course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is increasing day by day, and it is becoming equally important to explain how and why AI makes a particular decision. Recent laws have also added urgency to explaining and defending the decisions made by AI systems.
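As a brief, hedged illustration of the InterpretML topic above: the sketch below trains a glass-box Explainable Boosting Machine and requests both global and local explanations. The dataset and all parameter choices are illustrative assumptions, not material from the course itself.

```python
# Minimal InterpretML sketch (assumed setup, not from the course):
# train a glass-box EBM, then inspect global and local explanations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A glass-box model: per-feature shape functions stay human-readable.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: how each feature contributes across the whole dataset.
show(ebm.explain_global())

# Local explanation: why the model scored these specific rows as it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

In a notebook, show() renders an interactive explanation dashboard; the same explanation objects also expose their underlying scores for programmatic use.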


Explainable Artificial Intelligence (XAI) with Python

#artificialintelligence

This course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is increasing day by day, and it is becoming equally important to explain how and why AI makes a particular decision. Recent laws have also added urgency to explaining and defending the decisions made by AI systems. This course discusses tools and techniques for using Python to visualize, explain, and build trustworthy AI systems. It covers the working principles and mathematical modeling of LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) for generating local and global explanations.
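To make the two techniques concrete: LIME fits a sparse, interpretable surrogate model g around a single instance x by minimizing a locality-weighted loss, ξ(x) = argmin_{g∈G} L(f, g, π_x) + Ω(g), while SHAP attributes a prediction to features using Shapley values from cooperative game theory. The sketch below applies both to an assumed scikit-learn classifier; the model, dataset, and parameters are illustrative, not taken from the course.

```python
# Hedged sketch of LIME and SHAP local explanations (assumed model and data).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME: perturb one instance, fit a locality-weighted sparse linear surrogate.
lime_explainer = LimeTabularExplainer(
    data.data, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top local (feature, weight) pairs

# SHAP: Shapley-value attributions; TreeExplainer is exact for tree ensembles.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(data.data[:10])
print(shap_values)  # per-feature contributions for the first ten rows
```

LIME's weights explain one prediction locally; aggregating SHAP values over many rows (e.g., with shap.summary_plot) yields the kind of global view the course description mentions.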


Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges

arXiv.org Artificial Intelligence

When 5G began its commercialisation journey around 2020, discussion of the vision for 6G also surfaced. Researchers expect 6G to offer higher bandwidth, coverage, reliability, and energy efficiency, lower latency, and, more importantly, an integrated "human-centric" network system powered by artificial intelligence (AI). Such a 6G network will lead to an enormous number of automated decisions being made every second, ranging from network resource allocation to collision avoidance for self-driving cars. However, the risk of losing control over decision-making may grow as high-speed, data-intensive AI decision-making moves beyond the comprehension of designers and users. Promising explainable AI (XAI) methods can mitigate such risks by enhancing the transparency of the black-box AI decision-making process. This survey paper highlights the need for XAI in every aspect of the upcoming 6G age, covering both 6G technologies (e.g., intelligent radio, zero-touch network management) and 6G use cases (e.g., Industry 5.0). Moreover, we summarise the lessons learned from recent attempts and outline important research challenges in applying XAI to building 6G systems. This research aligns with goals 9, 11, 16, and 17 of the United Nations Sustainable Development Goals (UN-SDG): promoting innovation and building infrastructure, sustainable and inclusive human settlement, advancing justice and strong institutions, and fostering partnership at the global level.


A Beginner's Guide to Four Principles of Explainable Artificial Intelligence

#artificialintelligence

Artificial Intelligence is creating cutting-edge technologies for more efficient workflows in multiple industries across the world in this tech-driven era. Many machine learning and deep learning algorithms are too complicated for anyone besides AI engineers or related specialists to understand. In response, self-explaining algorithms have emerged that let stakeholders and partners comprehend the entire process of transforming enormous, complex sets of real-time data into meaningful, in-depth insights. This approach is known as Explainable Artificial Intelligence, or XAI, because the results of these solutions can be easily understood by humans. It helps AI designers explain how AI machines generated a specific insight or outcome, so that businesses can thrive in the market. Multiple online courses and platforms are available for a better understanding of Explainable AI by designing interpretable and inclusive Artificial Intelligence.


7 Free Resources To Learn Explainable AI

#artificialintelligence

Explainable AI (XAI) is key to establishing trust among users and fighting the black-box nature of machine learning models. In general, XAI enhances accountability and reliability in machine learning models. For a long time, tech giants like Google, IBM, and others have poured resources into explainable AI to explain the decision-making process of such models. Below are the top free resources for understanding Explainable AI (XAI) in detail. About: Explainable Machine Learning with LIME and H2O in R is a hands-on, guided introduction to explainable machine learning.


Explainable Goal-Driven Agents and Robots -- A Comprehensive Review

arXiv.org Artificial Intelligence

Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach cannot explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions and actions opaque and makes them difficult to trust in safety-critical applications. The recent stance on the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of these studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for realizing effective goal-driven explainable agents and robots.


An Explanation for eXplainable AI

#artificialintelligence

Artificial intelligence (AI) has been integrated into every part of our lives. A chatbot, enabled by advanced natural language processing (NLP), pops up to assist you while you surf a webpage. A voice recognition system can authenticate you in order to unlock your account. A drone or driverless car can perform operations or access areas that are humanly impossible. Machine-learning (ML) predictions are used in all kinds of decision-making.


A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI

arXiv.org Artificial Intelligence

This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. The pertinent literatures are vast, so this review is necessarily selective. That said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). It lays out the explainability issues and challenges of modern AI and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.