Neural Information Processing Systems
Overview: The authors propose the Gibbs error criterion for active learning, seeking the samples that maximize the expected Gibbs error under the current posterior. They propose a greedy algorithm that maximizes this criterion (Max-GEC). The objective reduces to maximizing a specific instance of the Tsallis entropy of the predictive distribution, which is closely related to Maximum Entropy Sampling (MES), where the Shannon entropy of the predictive distribution is used instead. They consider the non-adaptive, adaptive, and batch settings separately, and in each setting they use submodularity results to prove that the greedy approach achieves near-maximal performance relative to the optimal policy. They show how to implement the fully adaptive policy (approximately) in CRFs, with application to named entity recognition, and implement the batch algorithm with a Naive Bayes classifier, with application to a text classification task.
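For a single query the criterion the review describes can be sketched concretely: the Gibbs error of a predictive distribution p(y|x) is 1 - Σ_y p(y|x)², i.e. the Tsallis entropy with index 2, and a greedy per-point selection picks the unlabeled example maximizing it. The function names and the per-point (rather than joint, batch, or adaptive) scoring below are illustrative simplifications, not the paper's full Max-GEC algorithm.

```python
import numpy as np

def gibbs_error(probs):
    """Gibbs error of each predictive distribution.

    probs: array of shape (n_points, n_classes), rows sum to 1.
    Returns 1 - sum_y p(y)^2 per row, i.e. the Tsallis entropy
    with entropic index q = 2 (also the expected error of a
    classifier that samples its label from p).
    """
    return 1.0 - np.sum(probs ** 2, axis=1)

def max_gec_select(probs, batch_size=1):
    """Greedily pick the points with the largest Gibbs error.

    A simplified stand-in for the paper's greedy criterion: the
    actual batch/adaptive variants score sets of points jointly
    under the posterior rather than one point at a time.
    """
    scores = gibbs_error(probs)
    return np.argsort(scores)[::-1][:batch_size]
```

For example, a uniform predictive distribution [0.5, 0.5] has Gibbs error 0.5 and is preferred over a confident one like [0.9, 0.1] (Gibbs error 0.18), mirroring how MES would prefer it under Shannon entropy.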