Multicriteria interpretability driven Deep Learning
Recent software and hardware advances have democratized DL methods, allowing scholars and practitioners to apply them in their own fields. On the software side, frameworks such as TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2019) make it possible to create complex DL models without writing ad-hoc compilers, as was done by LeCun et al. (1990). On the hardware side, the falling cost of the hardware needed to train such models has allowed many people to build and deploy sophisticated neural networks at minimal expense (Zhang et al., 2018). The democratization of these powerful technologies has benefited many fields beyond computer science; among those that have benefited most are Economics (Nosratabadi et al., 2020) and Finance (Ozbayoglu et al., 2020). DL applications have also piqued the interest of governments, which are concerned about their possible social implications. It is well known that these models require extra vigilance with respect to training data in order to minimize biases of any kind, especially in high-stakes decisions (Rudin, 2019). To counter these side effects, governments have enacted several regulatory standards, and jurisprudence has begun to elaborate the concept of a right to explanation (Dexe et al., 2020). In this effort to build interpretable yet DL-grounded models, scholars have started developing post-hoc interpretation methods.
arXiv.org Artificial Intelligence
Nov-28-2021