Explaining Black-Box Models through Counterfactuals
Altmeyer, Patrick, van Deursen, Arie, Liem, Cynthia C. S.
arXiv.org Artificial Intelligence
Machine Learning models like Deep Neural Networks have become so complex and opaque in recent years that they are generally considered black-box systems. This lack of transparency exacerbates several other problems typically associated with these models: they tend to be unstable (Goodfellow, Shlens, and Szegedy 2014), encode existing biases (Buolamwini and Gebru 2018), and learn representations that are surprising or even counter-intuitive from a human perspective (Sturm 2014). Nonetheless, they often form the basis of data-driven decision-making systems in real-world applications. As others have pointed out, this scenario gives rise to an undesirable principal-agent problem involving a group of principals--i.e.
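The paper's title refers to counterfactual explanations: minimal changes to an input that flip a black-box model's prediction. As a minimal sketch of the general idea (not the authors' implementation), the widely used Wachter-style formulation searches for a nearby point whose prediction matches a target class by gradient descent on a prediction loss plus a distance penalty. All weights, parameters, and the toy classifier below are invented for illustration.

```python
import numpy as np

# Toy "black box": a fixed logistic classifier with made-up weights.
w = np.array([2.0, -1.0])
b = -0.5

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=1.0, lam=0.05, lr=0.5, steps=200):
    """Gradient search for x' close to x with prediction near `target`.

    Minimizes (p(x') - target)^2 + lam * ||x' - x||^2, a
    Wachter-style objective; lam trades off validity vs. proximity.
    """
    xp = x.copy()
    for _ in range(steps):
        p = predict_proba(xp)
        # Chain rule: d/dx' of the squared prediction loss, plus the
        # gradient of the quadratic distance penalty.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (xp - x)
        xp = xp - lr * grad
    return xp

x = np.array([-1.0, 1.0])   # factual instance, classified negative
xcf = counterfactual(x)      # nearby instance, classified positive
```

In practice the gradient of a real black box is obtained by automatic differentiation (or estimated numerically when only query access is available), and the distance term is often replaced by sparsity-inducing or plausibility-aware penalties.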
Aug-14-2023