What Are Explainable AI Principles


Explainable AI (XAI) principles are a set of guidelines describing fundamental properties that explainable AI systems should exhibit. Explainable AI seeks to make the way AI systems work understandable to people. NIST defines four such principles (Explanation, Meaningful, Explanation Accuracy, and Knowledge Limits), which draw on a variety of disciplines that contribute to explainable AI, including computer science, engineering, and psychology. The four principles apply individually: the presence of one does not imply that the others will be present, and NIST suggests that each principle be evaluated in its own right.
