The Responsibility Problem in Neural Networks with Unordered Targets
Ben Hayes, Charalampos Saitis, György Fazekas
arXiv.org Artificial Intelligence
We discuss the discontinuities that arise when mapping unordered objects to neural network outputs with a fixed permutation, referred to as the responsibility problem. Prior work proved the existence of the issue by identifying a single discontinuity. Here, we show that the discontinuities under such models are uncountably infinite, motivating further research into neural networks for unordered data.

The responsibility problem (Zhang et al., 2020b) describes an issue that arises when training neural networks with unordered targets: the fixed permutation of output units requires that each assume "responsibility" for some element. For feed-forward networks, the worst-case approximation of such discontinuous functions is arbitrarily poor on at least some subset of the input space (Kratsios & Zamanlooy, 2022). Empirically, degraded performance has been observed on set prediction tasks (Zhang et al., 2020a), motivating research into architectures for set generation which circumvent these discontinuities (Zhang et al., 2020a; Kosiorek et al., 2020; Rezatofighi et al., 2018).
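The discontinuity at the heart of the responsibility problem can be illustrated with a minimal numerical sketch. This is not code from the paper: the choice of a two-point set on the unit circle and the lexicographic assignment rule are our own illustrative assumptions. Any fixed rule that assigns set elements to output slots must, at some input, abruptly swap which slot is responsible for which element, so the per-slot target function jumps even though the underlying set varies continuously:

```python
import numpy as np

def set_of_points(theta):
    # An unordered set of two antipodal points on the unit circle,
    # parameterised continuously by the angle theta.
    p = np.array([np.cos(theta), np.sin(theta)])
    return [p, -p]

def fixed_slot_targets(theta):
    # A fixed "responsibility" rule: assign elements to output slots
    # by lexicographic order. Any such fixed rule induces a discontinuity.
    pts = sorted(set_of_points(theta), key=lambda q: (q[0], q[1]))
    return np.concatenate(pts)

# Scan theta and measure the jump between consecutive per-slot targets.
thetas = np.linspace(0.0, np.pi, 2001)
targets = np.array([fixed_slot_targets(t) for t in thetas])
jumps = np.linalg.norm(np.diff(targets, axis=0), axis=1)

# A continuous target would have jumps on the order of the step size;
# here the slot assignment flips near theta = pi/2, producing an O(1) jump.
print("max jump:", jumps.max())
print("median jump:", np.median(jumps))
```

The set itself moves smoothly with `theta`, yet the slot-wise regression target is discontinuous wherever the assignment rule flips, which is exactly the behaviour a feed-forward network with fixed output units is forced to approximate.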
Apr-19-2023