A Topological Approach for Semi-Supervised Learning


Nowadays, Machine Learning and Deep Learning methods have become the state-of-the-art approach to data classification tasks. To use these methods, it is necessary to acquire and label a considerable amount of data; however, this is not straightforward in some fields, since data annotation is time-consuming and may require expert knowledge. This challenge can be tackled by means of semi-supervised learning methods that take advantage of both labelled and unlabelled data. In this work, we present new semi-supervised learning methods based on techniques from Topological Data Analysis (TDA), a field that is gaining importance for analysing large amounts of high-variety, high-dimensional data. In particular, we have created two semi-supervised learning methods following two different topological approaches.
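The abstract does not describe the paper's topological constructions, but the core semi-supervised idea it builds on — letting unlabelled data sharpen a model fitted on a few labels — can be illustrated with a plain self-training loop. Everything below (the `self_train` function, the nearest-centroid base classifier, the confidence margin) is an illustrative assumption, not the authors' TDA-based method:

```python
import math
from collections import defaultdict

def centroid(points):
    # Component-wise mean of a list of points.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def self_train(labelled, unlabelled, n_rounds=5):
    """labelled: list of (point, label); unlabelled: list of points.
    Each round, fit class centroids on the labelled pool, pseudo-label
    the most confident half of the unlabelled pool, and move it over."""
    labelled = list(labelled)
    unlabelled = list(unlabelled)
    for _ in range(n_rounds):
        if not unlabelled:
            break
        by_class = defaultdict(list)
        for p, y in labelled:
            by_class[y].append(p)
        cents = {y: centroid(ps) for y, ps in by_class.items()}
        scored = []
        for p in unlabelled:
            ds = sorted((dist(p, c), y) for y, c in cents.items())
            conf = ds[1][0] - ds[0][0]  # margin between the two nearest centroids
            scored.append((conf, p, ds[0][1]))
        scored.sort(reverse=True)  # most confident pseudo-labels first
        taken = scored[:max(1, len(unlabelled) // 2)]
        labelled += [(p, y) for _, p, y in taken]
        taken_ids = {id(p) for _, p, _ in taken}
        unlabelled = [p for p in unlabelled if id(p) not in taken_ids]
    return labelled
```

With two well-separated clusters and one seed label per class, the loop pseudo-labels the rest of each cluster correctly; real semi-supervised methods (the paper's included) differ mainly in how that confidence and neighbourhood structure are defined.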

Which machine learning algorithm to choose for my problem? - Recast.AI Blog


We frequently hear about Machine Learning in the media, especially since the recent wave of interest in deep learning. The perpetual improvement of Machine Learning techniques, combined with the ever-increasing amount of stored data, suggests endless new applications. Many innovative solutions are emerging: autonomous driving, next-generation supermarkets with implicit payment, next-generation chatbots that can interact with you as a human being would, and so on. More than ever, the future seems within reach. But the more extravagant and original the application, the more the layman is put off.

A Lightly Supervised Approach to Role Identification in Wikipedia Talk Page Discussions

AAAI Conferences

In this paper we describe an application of a lightly supervised Role Identification Model (RIM) to the analysis of coordination in Wikipedia talk page discussions. Our goal is to understand the substance of important coordination roles that predict the quality of the Wikipedia pages where the discussions take place. Using the model as a lens, we present an analysis of four important coordination roles it identifies: Workers, Critiquers, Encouragers, and Managers.

Self-Supervised Learning in Vision Transformers


Anyone who has ever approached the world of machine learning has certainly heard of supervised learning and unsupervised learning. These are two fundamental approaches to Machine Learning that have been widely used for years. Only recently, however, has a new term exploded in popularity: Self-Supervised Learning! But let's get there step by step and look at the various methods one by one, trying to find an analogy with the human brain. Supervised Learning is like "learning based on labelled examples".

A Rate Distortion Approach for Semi-Supervised Conditional Random Fields

Neural Information Processing Systems

We propose a novel information-theoretic approach for semi-supervised learning of conditional random fields. Our approach defines a training objective that combines the conditional likelihood on labeled data with the mutual information on unlabeled data. Unlike previous minimum-conditional-entropy semi-supervised discriminative learning methods, our approach can be naturally cast into the rate distortion framework of information theory. We analyze the tractability of the framework for structured prediction and present a convergent variational training algorithm that sidesteps the combinatorial explosion of terms in the sum over label configurations.
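In symbols, the two-part objective described above can be sketched as follows (the notation here is an assumption for illustration, not taken from the paper; $\mathcal{L}$ and $\mathcal{U}$ denote the labeled and unlabeled sets, and $\lambda$ a trade-off weight):

```latex
\max_{\theta}\;
\underbrace{\sum_{(x,y)\in \mathcal{L}} \log p_\theta(y \mid x)}_{\text{conditional likelihood (labeled data)}}
\;+\;
\lambda\,
\underbrace{I_\theta(X;\, Y)}_{\substack{\text{mutual information,}\\ \text{estimated on } \mathcal{U}}}
```

The rate-distortion view arises because maximizing mutual information under a constraint is the dual of the classical rate-distortion trade-off between compression rate and distortion.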