Restoring Negative Information in Few-Shot Object Detection

arXiv.org Artificial Intelligence

Few-shot learning has recently emerged as a new challenge in deep learning: unlike conventional methods that train deep neural networks (DNNs) on large amounts of labeled data, it requires DNNs to generalize to new classes from only a few annotated samples. Recent advances in few-shot learning mainly focus on image classification, while in this paper we focus on object detection. Initial explorations in few-shot object detection tend to simulate a classification scenario by using the positive proposals in images with respect to a certain object class while discarding the negative proposals of that class. Negatives, especially hard negatives, however, are essential to learning the embedding space in few-shot object detection. In this paper, we restore the negative information in few-shot object detection by introducing a new negative- and positive-representative-based metric learning framework and a new inference scheme that uses both negative and positive representatives. We build our work on a recent few-shot pipeline, RepMet [1], adding several new modules that encode negative information for both training and testing. Extensive experiments on ImageNet-LOC and PASCAL VOC show that our method substantially improves over state-of-the-art few-shot object detection solutions.
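
To make the representative-based idea concrete, here is a minimal NumPy sketch of how a proposal embedding could be scored against per-class positive representatives while being suppressed by shared negative (background-like) representatives. The function name `proposal_scores`, the Gaussian-kernel scoring with a `sigma` bandwidth, and the multiplicative suppression by the negative similarity are illustrative assumptions, not the exact formulation used in RepMet or in this paper.

```python
import numpy as np

def proposal_scores(embedding, pos_reps, neg_reps, sigma=0.5):
    """Score one proposal embedding against positive and negative representatives.

    embedding: (D,)       proposal embedding
    pos_reps:  (C, K, D)  K positive representatives per class
    neg_reps:  (M, D)     negative (background-like) representatives

    Returns a (C,) array of class scores.
    """
    # distance of the proposal to every positive representative
    d_pos = np.linalg.norm(pos_reps - embedding, axis=-1)        # (C, K)
    # similarity to the closest positive representative of each class
    s_pos = np.exp(-d_pos.min(axis=-1) ** 2 / (2 * sigma ** 2))  # (C,)

    # similarity to the closest negative representative (shared by all classes)
    d_neg = np.linalg.norm(neg_reps - embedding, axis=-1)        # (M,)
    s_neg = np.exp(-d_neg.min() ** 2 / (2 * sigma ** 2))

    # a class scores highly only if the proposal is close to its positives
    # and far from the negatives
    return s_pos * (1.0 - s_neg)


# toy usage: 3 classes, 2 representatives each, 4 negatives, 8-d embeddings
rng = np.random.default_rng(0)
scores = proposal_scores(rng.normal(size=8),
                         rng.normal(size=(3, 2, 8)),
                         rng.normal(size=(4, 8)))
print(scores)
```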


6 Metrics You Need to Optimize for Performance in Machine Learning - DZone AI

#artificialintelligence

There are many metrics for measuring the performance of your machine learning model, depending on the type of machine learning you are looking to conduct. In this article, we take a look at performance measures for classification and regression models and discuss which are better to optimize. Sometimes the metric to look at will vary according to the problem that is being solved. The True Positive Rate, also called Recall, is the go-to performance measure in binary and multi-class classification problems. Most of the time, if not all of the time, we are only interested in correctly predicting one class.
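
As a concrete illustration of the True Positive Rate, the short snippet below computes Recall by hand from a confusion matrix and checks it against scikit-learn; the toy labels are made up for the example.

```python
from sklearn.metrics import recall_score, confusion_matrix

# toy labels for a binary problem (1 = the class we care about)
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# True Positive Rate (Recall) = TP / (TP + FN)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp / (tp + fn))                # 0.8, computed by hand
print(recall_score(y_true, y_pred))  # same value via scikit-learn
```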


Blogs

@machinelearnbot

Most tasks in Machine Learning can be reduced to classification tasks. For example, we have a medical dataset and we want to classify who has diabetes (positive class) and who doesn't (negative class). We have a dataset from the financial world and want to know which customers will default on their credit (positive class) and which customers will not (negative class). To do this, we can train a Classifier with a 'training dataset', and after such a Classifier is…
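
The sketch below trains a simple classifier on a synthetic binary dataset standing in for the diabetes or credit-default examples; the use of scikit-learn's `make_classification` and `LogisticRegression` is an assumption made for the sake of a runnable example, not something specified in the post.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for a binary dataset such as diabetes or credit default
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# train a classifier on the training split, then score it on held-out data
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the test split
```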


The Basics: evaluating classifiers

#artificialintelligence

Judging a classification model feels like it should be an easier task than judging a regression model. After all, a prediction from a classification model can only be right or wrong, while a prediction from a regression model can be more or less wrong, with any level of error, high or low. Yet judging a classification model is not as simple as it may seem. There is more than one way for a classification to be right or wrong, and multiple ways to combine the different ways of being right and wrong into a single metric. Of course, all these metrics have different, frequently unintuitive names (precision, recall, F1, ROC curves), making the process seem a little forbidding from the outside.
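
To take some of the mystery out of those names, the snippet below computes precision, recall, F1, and ROC AUC on a toy set of predictions with scikit-learn; the labels and scores are invented purely for illustration.

```python
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# toy predictions: hard labels for precision/recall/F1, scores for ROC AUC
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]
y_score = [0.2, 0.6, 0.8, 0.9, 0.4, 0.1, 0.7, 0.3]

print(precision_score(y_true, y_pred))  # of everything predicted positive, how much was right
print(recall_score(y_true, y_pred))     # of everything actually positive, how much was found
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
print(roc_auc_score(y_true, y_score))   # ranking quality across all thresholds
```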