Distributed One-class Learning

arXiv.org Machine Learning

We propose a cloud-based filter trained to block third parties from uploading privacy-sensitive images of others to online social media. The proposed filter uses Distributed One-Class Learning, which decomposes the cloud-based filter into multiple one-class classifiers. Each one-class classifier captures the properties of one class of privacy-sensitive images with an autoencoder. The multi-class filter is then reconstructed by combining the parameters of the one-class autoencoders. Training takes place on edge devices (e.g., smartphones), so users do not need to upload their private and/or sensitive images to the cloud. A major advantage of the proposed filter over existing distributed learning approaches is that users cannot access, even indirectly, the parameters of other users. Moreover, the filter can cope with the imbalanced and complex distribution of image content and with new users joining independently over time. We evaluate the performance of the proposed distributed filter on the exemplar task of blocking a user from sharing privacy-sensitive images of other users. In particular, we validate the behavior of the proposed multi-class filter on non-privacy-sensitive images, its accuracy as the number of classes increases, and its robustness to attacks when an adversarial user has access to privacy-sensitive images of other users.
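
To make the mechanism concrete, here is a minimal sketch of a one-class autoencoder filter, assuming a simple dense autoencoder and a reconstruction-error threshold; the architecture, sizes, and threshold rule are illustrative stand-ins, not the paper's exact design:

```python
# Minimal sketch of a one-class autoencoder filter (illustrative, not the
# paper's exact architecture). Each user trains an autoencoder on their own
# privacy-sensitive images; at inference, a low reconstruction error on a
# new image suggests it belongs to that user's protected class.
import torch
import torch.nn as nn

class OneClassAE(nn.Module):
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_local(model, images, epochs=10, lr=1e-3):
    """Runs on the user's edge device; private images never leave it."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), images)
        loss.backward()
        opt.step()
    return model

def is_sensitive(model, image, threshold):
    """Cloud-side check: flag the upload if reconstruction error is low."""
    with torch.no_grad():
        err = ((model(image) - image) ** 2).mean().item()
    return err < threshold
```

The key property the sketch preserves is the split of responsibilities: training runs locally on the user's device, and only the trained parameters, never the images, reach the cloud.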


Dealing with Unbalanced Classes in Machine Learning - deep ideas

#artificialintelligence

In many real-world classification problems, we stumble upon training data with unbalanced classes. This means that the individual classes do not contain the same number of elements. For example, if we want to build an image-based skin cancer detection system using convolutional neural networks, we might encounter a dataset with about 95% negatives and 5% positives. This happens for a good reason: images associated with a negative diagnosis are far more common than images with a positive diagnosis. Rather than regarding this as a flaw in the dataset, we should leverage the additional information it gives us.
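
One standard remedy, which the excerpt alludes to rather than spells out, is to reweight the loss by inverse class frequency. A short sketch with scikit-learn, using synthetic data in place of the skin-cancer example:

```python
# Sketch: counteracting a 95/5 class imbalance with inverse-frequency
# class weights (one common remedy; the data here is a synthetic stand-in).
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.choice([0, 1], size=1000, p=[0.95, 0.05])   # ~95% negatives
X = rng.normal(size=(1000, 10)) + y[:, None]        # toy features

weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))  # the rare positive class gets a larger weight

# Most scikit-learn classifiers accept the same reweighting directly:
clf = LogisticRegression(class_weight="balanced").fit(X, y)
```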


How to evaluate a machine learning model - part 4 - Edvancer Eduventures

#artificialintelligence

This blog post is the continuation of my previous articles part 1, part 2 and part 3.

Caution: The Difference Between Training Metrics and Evaluation Metrics

Sometimes, the model training procedure uses a different metric (also known as a loss function) than the evaluation. This can happen, for instance, when we repurpose a model for a task other than the one it was designed for. For example, we might train a personalized recommender by minimizing the loss between its predictions and observed ratings, and then use this recommender to produce a ranked list of recommendations. This is not an optimal scenario: it makes the model's life difficult by asking it to do a task it was not trained to do.
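
A toy sketch of that mismatch, with placeholder data: the recommender is scored during training by how well it reproduces ratings (MSE), but judged at evaluation time by the quality of its top-ranked list (precision@k):

```python
# Sketch of the metric mismatch described above: a recommender trained to
# minimize rating MSE, then evaluated on ranking quality (precision@k).
# Data and predictions are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)
true_ratings = rng.uniform(1, 5, size=100)            # held-out item ratings
pred_ratings = true_ratings + rng.normal(0, 1, 100)   # model predictions

# Training-style metric: how close are the predicted ratings?
mse = np.mean((pred_ratings - true_ratings) ** 2)

# Evaluation-style metric: of the top-10 recommended items, how many are
# actually relevant (true rating >= 4)?
k = 10
top_k = np.argsort(pred_ratings)[::-1][:k]
precision_at_k = np.mean(true_ratings[top_k] >= 4)

print(f"MSE = {mse:.2f}, precision@{k} = {precision_at_k:.2f}")
# A model can score well on one of these metrics and poorly on the other.
```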


Learning meters of Arabic and English poems with Recurrent Neural Networks: a step forward for language understanding and synthesis

arXiv.org Machine Learning

Recognizing a piece of writing as a poem or prose is usually easy for most people; however, only specialists can determine which meter a poem belongs to. In this paper, we build Recurrent Neural Network (RNN) models that classify poems according to their meters from plain text. The input text is encoded at the character level and fed directly to the models without handcrafted features. This is a step forward for machine understanding and synthesis of languages in general, and of the Arabic language in particular. Across the 16 poem meters of Arabic and the 4 meters of English, the networks correctly classified poems with overall accuracies of 96.38% and 82.31%, respectively. The poem datasets used to conduct this research were massive, over 1.5 million verses, and were crawled from various nontechnical sources, mostly Arabic and English literature sites, in heterogeneous and unstructured formats. These datasets are now made publicly available in a clean, structured, and documented format for future research. To the best of the authors' knowledge, this research is the first to address classifying poem meters with a machine learning approach in general, and with a featureless RNN-based approach in particular. In addition, the dataset is the first publicly available one ready for future computational research.
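
As a rough illustration of the featureless, character-level setup the abstract describes, here is a minimal PyTorch sketch; the vocabulary size, layer widths, and verse length are hypothetical placeholders, not the authors' architecture:

```python
# Minimal character-level RNN classifier in the spirit of the paper
# (illustrative sizes; not the authors' exact model).
import torch
import torch.nn as nn

class MeterClassifier(nn.Module):
    def __init__(self, vocab_size, n_meters, embed=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_meters)

    def forward(self, char_ids):          # (batch, seq_len) character indices
        x = self.embed(char_ids)          # no handcrafted features: raw chars in
        _, (h, _) = self.rnn(x)           # final hidden state summarizes the verse
        return self.out(h[-1])            # logits over meter classes

# e.g. the 16 Arabic meters, with a hypothetical 40-character alphabet:
model = MeterClassifier(vocab_size=40, n_meters=16)
verse = torch.randint(0, 40, (1, 60))     # one verse, 60 characters
logits = model(verse)
```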


TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

arXiv.org Machine Learning

Handling previously unseen tasks after being given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is computed from a distance metric between the query and the reference vectors in the projection space. This yields excellent generalization. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios.
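
A sketch of the distance-based classification step in a projection space, with placeholder tensors; note that the projection M here is random for illustration only, whereas TapNet constructs it per episode so that the class reference vectors align with the task's support embeddings:

```python
# Sketch of nearest-reference classification in a projection space.
# M is a fixed placeholder here; TapNet computes a task-adaptive M
# for every episode.
import torch

def classify(query_emb, refs, M):
    """query_emb: (q, d) query embeddings; refs: (n_classes, d) per-class
    reference vectors; M: (d, p) projection. Returns a class per query."""
    q = query_emb @ M                     # project queries
    r = refs @ M                          # project class references
    dists = torch.cdist(q, r)             # pairwise distances in projection space
    return dists.argmin(dim=1)            # nearest reference wins

d, p, n_classes = 64, 32, 5
M = torch.randn(d, p)                     # placeholder projection
refs = torch.randn(n_classes, d)          # learned per-class references
queries = torch.randn(10, d)              # embedded query images
print(classify(queries, refs, M))
```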