Unified Explanations in Machine Learning Models: A Perturbation Approach
Dineen, Jacob, Kridel, Don, Dolk, Daniel, Castillo, David
–arXiv.org Artificial Intelligence
A high-velocity paradigm shift towards Explainable Artificial Intelligence (XAI) has emerged in recent years. Highly complex Machine Learning (ML) models have flourished in many tasks of intelligence, and the questions have started to shift away from traditional metrics of validity towards something deeper: What is this model telling me about my data, and how is it arriving at these conclusions? Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches. To address these problems, …

This communication problem has migrated into the arena of machine learning (ML) and artificial intelligence (AI) in recent times, giving rise to the need for and subsequent emergence of Explainable AI (XAI). XAI has arisen from growing discontent with "black box" models, often in the form of neural networks and other emergent, dynamic models (e.g., agent-based simulation, genetic algorithms) that generate outcomes lacking in transparency. This has also been studied through the lens of general machine learning, where classic methods also face an interpretability crisis for high dimensional inputs [1].
May-30-2024
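The paper's title names a perturbation approach to explanation. As a minimal sketch of the general idea (not the paper's specific method), the following shows permutation feature importance on a toy black-box model: shuffle one feature at a time and measure how much the model's error grows. All names and the synthetic data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2 (illustrative assumption).
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

def model(X):
    # Stand-in for any fitted "black box" we can only query.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, model(X))

# Perturbation loop: permute one feature at a time and record the
# increase in error. A larger increase indicates a more important feature.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, model(Xp)) - baseline)

print(importances)  # feature 0 dominates; feature 2 contributes nothing
```

Because the model is queried only through its predictions, this kind of perturbation analysis applies uniformly to neural networks, simulations, or classic ML models, which is the unifying appeal the abstract alludes to.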