Mutual exclusivity as a challenge for deep neural networks

Neural Information Processing Systems

Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not vanilla neural architectures have an ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing task-general neural networks that learn through mutual exclusivity, which remains an open challenge.
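The failure mode the abstract describes can be illustrated with a toy scoring function. This is a hypothetical sketch, not the paper's protocol: `me_score` and the probability vectors below are invented for illustration. The idea is that when a learner trained only on familiar labels is shown a novel stimulus, an ME-biased learner shifts probability mass to never-used labels, whereas a vanilla softmax learner keeps it on familiar ones.

```python
import numpy as np

def me_score(probs, familiar_idx, novel_idx):
    """Fraction of probability mass assigned to novel (never-trained)
    labels when the model sees a novel stimulus. A strong ME learner
    scores near 1; a vanilla network typically scores near 0.
    (Illustrative metric, not the paper's exact measure.)"""
    p = np.asarray(probs, dtype=float)
    return p[novel_idx].sum() / p[[*familiar_idx, *novel_idx]].sum()

# Labels 0-1 are familiar (trained), 2-3 are novel (never trained).
vanilla = [0.48, 0.47, 0.03, 0.02]     # mass stays on familiar labels
me_learner = [0.05, 0.05, 0.45, 0.45]  # mass shifts to novel labels

print(me_score(vanilla, [0, 1], [2, 3]))     # low  -> no ME bias
print(me_score(me_learner, [0, 1], [2, 3]))  # high -> ME bias
```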


Graphint: Graph-based Time Series Clustering Visualisation Tool

Boniol, Paul, Tiano, Donato, Bonifati, Angela, Palpanas, Themis

arXiv.org Artificial Intelligence

With the exponential growth of time series data across diverse domains, there is a pressing need for effective analysis tools. Time series clustering is important for identifying patterns in these datasets. However, prevailing methods often encounter obstacles in maintaining data relationships and ensuring interpretability. We present Graphint, an innovative system based on the $k$-Graph methodology that addresses these challenges. Graphint integrates a robust time series clustering algorithm with an interactive tool for comparison and interpretation. More precisely, our system allows users to compare results against competing approaches, identify discriminative subsequences within specified datasets, and visualize the critical information utilized by $k$-Graph to generate outputs. Overall, Graphint offers a comprehensive solution for extracting actionable insights from complex temporal datasets.


$k$-Graph: A Graph Embedding for Interpretable Time Series Clustering

Boniol, Paul, Tiano, Donato, Bonifati, Angela, Palpanas, Themis

arXiv.org Artificial Intelligence

Time series clustering poses a significant challenge with diverse applications across domains. A prominent drawback of existing solutions lies in their limited interpretability, often confined to presenting users with centroids. In addressing this gap, our work presents $k$-Graph, an unsupervised method explicitly crafted to augment interpretability in time series clustering. Leveraging a graph representation of time series subsequences, $k$-Graph constructs multiple graph representations based on different subsequence lengths. This feature accommodates variable-length time series without requiring users to predetermine subsequence lengths. Our experimental results reveal that $k$-Graph outperforms current state-of-the-art time series clustering algorithms in accuracy, while providing users with meaningful explanations and interpretations of the clustering outcomes.
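The graph representation the abstract describes can be sketched in miniature. This is an illustrative stand-in, not the authors' implementation: a window slides over the series, each subsequence is mapped to a node (here via crude mean-based binning in place of the paper's subsequence clustering), and transitions between consecutive subsequences become weighted edges.

```python
from collections import Counter

def subsequence_graph(series, length, n_bins=4):
    """Toy k-Graph-style representation (illustrative only): nodes are
    coarse subsequence labels, edge weights count transitions between
    consecutive windows."""
    lo, hi = min(series), max(series)

    def node(sub):
        # Quantise the subsequence mean into one of n_bins node labels;
        # the real method clusters subsequences instead.
        m = sum(sub) / len(sub)
        return min(int((m - lo) / (hi - lo + 1e-12) * n_bins), n_bins - 1)

    nodes = [node(series[i:i + length])
             for i in range(len(series) - length + 1)]
    edges = Counter(zip(nodes, nodes[1:]))
    return nodes, edges

# A short up-down-up series: repeated transitions show up as heavier edges.
nodes, edges = subsequence_graph([0, 1, 2, 3, 2, 1, 0, 1, 2, 3], length=3)
```

Because each node corresponds to a concrete set of subsequences, a heavy edge or node can be traced back to the raw patterns that produced it, which is the kind of interpretability the abstract emphasises.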


Towards an Improved Metric for Evaluating Disentangled Representations

Julka, Sahib, Wang, Yashu, Granitzer, Michael

arXiv.org Artificial Intelligence

As defined by Bengio et al. [1], representation learning transforms observations into a format that captures the essence of data's inherent patterns and structures. An ideal representation should exhibit five key characteristics: (a) Disentanglement, ensuring separate encoding of interpretable factors; (b) Informativeness, capturing the diversity of data; (c) Invariance, maintaining stability across changes in unrelated [...] recent scholarly reviews on the topic [8, 7]. Accordingly, a metric designed to quantify modularity and compactness should also assess informativeness, i.e., the extent to which latent codes encapsulate information about generative factors. When the ground truth factors of variation are identifiable, this informativeness transforms into explicitness, denoting the comprehensive representation of all recognised factors [9].


Visually Grounded Speech Models have a Mutual Exclusivity Bias

Nortje, Leanne, Oneaţă, Dan, Matusevych, Yevgen, Kamper, Herman

arXiv.org Artificial Intelligence

When children learn new words, they employ constraints such as the mutual exclusivity (ME) bias: a novel word is mapped to a novel object rather than a familiar one. This bias has been studied computationally, but only in models that use discrete word representations as input, ignoring the high variability of spoken words. We investigate the ME bias in the context of visually grounded speech models that learn from natural images and continuous speech audio. Concretely, we train a model on familiar words and test its ME bias by asking it to select between a novel and a familiar object when queried with a novel word. To simulate prior acoustic and visual knowledge, we experiment with several initialisation strategies using pretrained speech and vision networks. Our findings reveal the ME bias across the different initialisation approaches, with a stronger bias in models with more prior (in particular, visual) knowledge. Additional tests confirm the robustness of our results, even when different loss functions are considered.
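The forced-choice test described above has a simple structure that can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and the toy embeddings are invented, and cosine similarity stands in for whatever matching score the trained audio-visual model produces. On each trial the model hears a novel word, sees one novel and one familiar object, and counts as ME-consistent when the novel object scores higher.

```python
import numpy as np

def me_accuracy(word_embs, novel_img_embs, familiar_img_embs):
    """Fraction of trials where a novel word's embedding is more similar
    (by cosine) to the novel object than to the familiar one.
    Chance is 0.5; scores above it indicate an ME bias."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    hits = [cos(w, n) > cos(w, f)
            for w, n, f in zip(word_embs, novel_img_embs, familiar_img_embs)]
    return sum(hits) / len(hits)

# Two toy trials in which the novel word aligns with the novel object:
words = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
novel = [np.array([1.0, 0.1]), np.array([0.1, 1.0])]
familiar = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
print(me_accuracy(words, novel, familiar))
```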