Positive-Unlabeled Learning using Random Forests via Recursive Greedy Risk Minimization

Neural Information Processing Systems

The need to learn from positive and unlabeled data, or PU learning, arises in many applications and has attracted increasing interest. While random forests are known to perform well on many tasks with positive and negative data, recent PU algorithms are generally based on deep neural networks, and the potential of tree-based PU learning is under-explored. In this paper, we propose new random forest algorithms for PU learning. Key to our approach is a new interpretation of decision tree algorithms for positive and negative data as \emph{recursive greedy risk minimization algorithms}. We extend this perspective to the PU setting to develop new decision tree learning algorithms that directly minimize PU-data-based estimators of the expected risk. This allows us to develop an efficient PU random forest algorithm, PU extra trees. Our approach features three desirable properties: it is robust to the choice of the loss function, in the sense that various loss functions lead to the same decision trees; it requires little hyperparameter tuning compared to neural-network-based PU learning; and it supports a feature importance measure that directly quantifies a feature's contribution to risk minimization. Our algorithms demonstrate strong performance on several datasets.
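The abstract refers to PU-data-based estimators of the expected risk without stating one. A widely used choice in the PU literature is the non-negative PU risk estimator, which rewrites the expected risk in terms of positive and unlabeled samples and the class prior. The sketch below is illustrative only; the paper's exact estimator and loss may differ, and the function name and hinge loss are assumptions.

```python
import numpy as np

def pu_risk(scores_pos, scores_unl, prior,
            loss=lambda z: np.maximum(0.0, 1.0 - z)):
    """Non-negative PU risk estimate from positive and unlabeled scores.

    scores_pos / scores_unl: real-valued classifier outputs on positive
    and unlabeled samples; prior: class prior pi = P(y = +1).
    Illustrative sketch of a standard PU estimator, not necessarily the
    one used in the paper.
    """
    r_p_plus = loss(scores_pos).mean()    # positive-class loss on positives
    r_p_minus = loss(-scores_pos).mean()  # negative-class loss on positives
    r_u_minus = loss(-scores_unl).mean()  # negative-class loss on unlabeled
    # Clip the estimated negative risk at zero to keep the estimate non-negative.
    return prior * r_p_plus + max(0.0, r_u_minus - prior * r_p_minus)
```

A tree-splitting criterion can then greedily pick the split whose child predictions minimize this estimate, which is the recursive greedy view the paper takes.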





Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. Summary: The paper presents two novel convolutional neural network architectures for modeling sentences in natural language. These networks are trained specifically for the problem of matching a pair of sentences. The first architecture is a minor modification of the standard way of using a convolutional network over natural language sentences. After a convolution operation on the word embeddings, instead of pooling across time (the full sequence of words in a sentence) to select a single feature (or k features), the proposed model applies pooling to features associated with consecutive pairs of words.
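The pooling scheme the review describes, pooling over consecutive pairs of words rather than over the whole sequence, can be sketched as a width-2 max-pool over the time axis of the post-convolution feature map. This is a minimal illustration assuming non-overlapping pairs; the function name, shapes, and the choice of max (rather than another pooling operation) are assumptions, not the paper's exact interface.

```python
import numpy as np

def pairwise_max_pool(feats):
    """Max-pool over non-overlapping pairs of consecutive time steps.

    feats: (seq_len, num_features) array of post-convolution activations.
    Returns a (seq_len // 2, num_features) array; a trailing odd time step
    is dropped for simplicity in this sketch.
    """
    n = feats.shape[0] // 2 * 2             # truncate to an even length
    pairs = feats[:n].reshape(-1, 2, feats.shape[1])
    return pairs.max(axis=1)                # max over each pair of steps
```

Unlike full-sequence pooling, this halves the temporal resolution at each layer while keeping local order information, which is what makes stacking further convolution layers on top useful.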