Crowdclass: Designing Classification-Based Citizen Science Learning Modules

AAAI Conferences

In this paper, we introduce Crowdclass, a novel framework that integrates the learning of advanced scientific concepts with the crowdsourcing microtask of image classification. In Crowdclass, we design questions that serve as both a learning experience and a scientific classification. This differs from conventional citizen science platforms, which decompose high-level questions into a series of simple microtasks that require no scientific background knowledge to complete. We facilitate learning within the microtask by providing content appropriate for the participant's level of knowledge through scaffolded learning. We conduct a between-group study of 93 participants on Amazon Mechanical Turk comparing Crowdclass to the popular citizen science project Galaxy Zoo. We find that the scaffolded presentation of content enables learning of more challenging concepts. By understanding the relationship between user motivation, learning, and performance, we draw general design principles for learning-as-an-incentive interventions applicable to other crowdsourcing applications.


Machine Learning Inside Google - SEO by the Sea

#artificialintelligence

When I was in high school, one of the required classes I had to take was a shop class. I had been taking mostly what the school called "enriched" courses: academic classes centered on reading, writing, and arithmetic. A shop class had more of a trade focus. I was surprised when the first lesson on the first day of my shop class was a richer academic experience than any of the enriched classes I had taken. The instructor started talking about systems, and how many manufacturing processes involved breaking products down into different systems.


Network Transplanting

arXiv.org Machine Learning

This paper focuses on a novel problem: transplanting a category-and-task-specific neural network into a generic, distributed network without strong supervision. Like assembling LEGO blocks, incrementally constructing a generic network by asynchronously merging specific neural networks addresses a crucial bottleneck for deep learning. Suppose the pre-trained specific network contains a module $f$ that extracts features of the target category, and the generic network has a module $g$ for a target task, trained on categories other than the target category. Instead of using numerous training samples to teach the generic network a new category, we aim to learn a small adapter module connecting $f$ and $g$ to accomplish the task on the target category in a weakly-supervised manner. The core challenge is to efficiently learn feature projections between the two connected modules. We propose a new distillation algorithm, which exhibited superior performance. Our method without training samples even significantly outperformed the baseline with 100 training samples.
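The adapter idea can be illustrated with a toy linear setup. This is a minimal sketch, not the paper's method: the frozen modules `f` and `g`, the teacher projection `W_teacher`, and all dimensions are hypothetical stand-ins, and plain gradient descent on a squared distillation loss stands in for the paper's distillation algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen modules: f maps 16-d inputs to 8-d category
# features; g expects 6-d inputs for the target task. Both stay fixed.
W_f = rng.normal(size=(8, 16))   # pre-trained, frozen
W_g = rng.normal(size=(4, 6))    # pre-trained, frozen

def f(x):  # category-specific feature extractor
    return np.tanh(W_f @ x)

def g(h):  # generic task module
    return W_g @ h

# Small adapter projecting f's 8-d features into g's 6-d input space.
# Only this matrix is learned.
A = rng.normal(scale=0.1, size=(6, 8))

# Stand-in distillation target: outputs of a hypothetical teacher map.
W_teacher = rng.normal(size=(6, 8))
xs = rng.normal(size=(32, 16))

def distill_err(A):
    return float(np.mean([np.linalg.norm(A @ f(x) - W_teacher @ f(x))
                          for x in xs]))

initial_err = distill_err(A)

# Learn only the adapter A by gradient descent on a squared
# distillation loss, leaving f and g untouched.
lr = 0.05
for _ in range(500):
    grad = np.zeros_like(A)
    for x in xs:
        h = f(x)
        err = A @ h - W_teacher @ h
        grad += np.outer(err, h)
    A -= lr * grad / len(xs)

final_err = distill_err(A)
# After training, g(A @ f(x)) handles the new category end to end.
out = g(A @ f(xs[0]))
```

Because `f` and `g` are never updated, the generic network keeps its behavior on all previously learned categories while the small adapter alone absorbs the new one.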


Few-shot Learning with LSSVM Base Learner and Transductive Modules

arXiv.org Machine Learning

The performance of meta-learning approaches for few-shot learning generally depends on three aspects: features suitable for comparison, a classifier (base learner) suitable for low-data scenarios, and valuable information from the samples to classify. In this work, we make improvements in the last two aspects: 1) although there are many effective base learners, there is a trade-off between generalization performance and computational overhead, so we introduce the multi-class least squares support vector machine as our base learner, which obtains better generalization than existing ones with less computational overhead; 2) further, in order to utilize the information from the query samples, we propose two simple and effective transductive modules that modify the support set using the query samples, i.e., adjusting the support samples based on an attention mechanism and adding the prototypes of the query set with pseudo labels to the support set as pseudo support samples. These two modules significantly improve few-shot classification accuracy, especially in the difficult 1-shot setting. Our model, denoted as FSLSTM (Few-Shot learning with LSsvm base learner and Transductive Modules), achieves state-of-the-art performance on the miniImageNet and CIFAR-FS few-shot learning benchmarks.
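The computational appeal of the LSSVM base learner is that replacing the hinge loss with a squared loss turns fitting into solving one linear system rather than a QP. The sketch below is an assumed minimal version on a toy episode, not the paper's implementation: the embedding space, episode sizes, and ridge-regression formulation of the one-vs-all LSSVM are illustrative, as is the pseudo-prototype step standing in for the second transductive module.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy few-shot episode: 3 classes ("ways"), 5 support samples each
# ("shots"), in a hypothetical 16-d embedding space.
n_way, k_shot, dim = 3, 5, 16
centers = rng.normal(scale=3.0, size=(n_way, dim))
support_x = np.vstack([c + rng.normal(size=(k_shot, dim)) for c in centers])
support_y = np.repeat(np.arange(n_way), k_shot)

def fit_lssvm(X, y, n_classes, lam=1.0):
    """One-vs-all least-squares SVM with a linear kernel.

    The squared loss reduces training to a single regularized
    linear system (solved here in ridge-regression form).
    """
    T = -np.ones((len(y), n_classes))
    T[np.arange(len(y)), y] = 1.0              # +1 / -1 class targets
    Xb = np.hstack([X, np.ones((len(X), 1))])  # bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ T)

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)

W = fit_lssvm(support_x, support_y, n_way)
query_x = np.vstack([c + rng.normal(size=(10, dim)) for c in centers])
query_y = np.repeat(np.arange(n_way), 10)
acc = float(np.mean(predict(W, query_x) == query_y))

# Transductive refinement in the spirit of the second module: add
# query prototypes with pseudo labels to the support set and refit.
pseudo = predict(W, query_x)
aug_x, aug_y = [support_x], [support_y]
for c in range(n_way):
    mask = pseudo == c
    if mask.any():
        aug_x.append(query_x[mask].mean(axis=0, keepdims=True))
        aug_y.append(np.array([c]))
W2 = fit_lssvm(np.vstack(aug_x), np.concatenate(aug_y), n_way)
acc2 = float(np.mean(predict(W2, query_x) == query_y))
```

The linear-system solve is what keeps the base learner cheap enough to run inside every meta-training episode.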


Examples -- scikit-learn 0.17.1 documentation

#artificialintelligence

This documentation is for scikit-learn version 0.17.1; other versions are available separately. If you use the software, please consider citing scikit-learn. The examples include applications to real-world problems with medium-sized datasets or interactive user interfaces, as well as examples illustrating the calibration of predicted probabilities of classifiers.
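One technique the calibration examples cover is sigmoid (Platt) calibration, which maps raw classifier scores to probabilities. The following is a minimal numpy-only sketch of the idea, independent of scikit-learn itself; the synthetic scores and gradient-descent fit are illustrative assumptions, not the library's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: binary labels with informative but miscalibrated
# raw scores from a hypothetical classifier.
n = 400
labels = rng.integers(0, 2, size=n)
scores = 2.0 * (labels - 0.5) + rng.normal(scale=1.0, size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Platt scaling: fit p = sigmoid(a * score + b) by minimizing the
# log loss with plain gradient descent over the two parameters.
a, b = 1.0, 0.0
lr = 0.1
for _ in range(2000):
    p = sigmoid(a * scores + b)
    grad_z = p - labels              # d(log loss) / d(a*score + b)
    a -= lr * np.mean(grad_z * scores)
    b -= lr * np.mean(grad_z)

calibrated = sigmoid(a * scores + b)
log_loss = float(-np.mean(labels * np.log(calibrated)
                          + (1 - labels) * np.log(1 - calibrated)))
```

In scikit-learn itself this role is played by `CalibratedClassifierCV`, which wraps a fitted classifier and applies sigmoid or isotonic calibration via cross-validation.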