Toward a Better Understanding of Leaderboard

arXiv.org Machine Learning

The leaderboard in machine learning competitions is a tool for displaying and comparing the performance of participants. However, it can quickly lose accuracy due to hacking or overfitting. This article gives two pieces of advice for preventing easy hacking and overfitting. By following this advice, we reach the conclusion that something like the Ladder leaderboard introduced in [blum2015ladder] is inevitable. With this understanding, we naturally simplify Ladder by eliminating its redundant computation and explain how to choose and interpret its parameter. We also prove that the sample complexity is cubic in the desired precision of the leaderboard.
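To make the mechanism concrete, below is a minimal sketch of a Ladder-style leaderboard in the spirit of [blum2015ladder]: a submission moves the displayed score only if it beats the current best by at least a step size, and the shown score is rounded to that step size. The class name, the step size `eta`, and the example losses are illustrative assumptions, not the simplified variant or parameter choice proposed in the article.

```python
# A minimal sketch of a Ladder-style leaderboard (after Blum & Hardt, 2015).
# `eta` is an illustrative step size, not the parameter selection from the article.

class LadderLeaderboard:
    def __init__(self, eta):
        self.eta = eta             # resolution of the reported scores
        self.best = float("inf")   # best (lowest) loss reported so far

    def submit(self, loss):
        """Return the score shown on the public leaderboard for this submission."""
        # Update the displayed score only if the new loss beats the current
        # best by at least eta; otherwise echo the previous best.
        if loss < self.best - self.eta:
            # Round to a multiple of eta so tiny fluctuations leak no information.
            self.best = round(loss / self.eta) * self.eta
        return self.best


# Usage: resubmissions that only shave off noise do not move the board.
board = LadderLeaderboard(eta=0.01)
print(board.submit(0.305))  # first entry sets the score
print(board.submit(0.302))  # improvement smaller than eta: score unchanged
print(board.submit(0.280))  # real improvement: score drops
```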


How to Host a Data Competition: Statistical Advice for Design and Analysis of a Data Competition

arXiv.org Machine Learning

Data competitions rely on real-time leaderboards to rank competitor entries and stimulate algorithm improvement. While such competitions have become quite popular and prevalent, particularly in supervised learning formats, their implementations by the host are highly variable. Without careful planning, a supervised learning competition is vulnerable to overfitting, where the winning solutions are so closely tuned to the particular set of provided data that they cannot generalize to the underlying problem of interest to the host. Based on our experience, this paper outlines some important considerations for strategically designing relevant and informative data sets to maximize the learning outcome from hosting a competition. It also describes a post-competition analysis that enables robust and efficient assessment of the strengths and weaknesses of solutions from different competitors, as well as greater understanding of the regions of the input space that are well solved. The post-competition analysis, which complements the leaderboard, uses exploratory data analysis and generalized linear models (GLMs). The GLMs not only expand the range of results we can explore, but also provide more detailed analysis of individual sub-questions, including similarities and differences between algorithms across different types of scenarios, universally easy or hard regions of the input space, and different learning objectives. When coupled with a strategically planned data generation approach, the methods provide richer and more informative summaries to enhance the interpretation of results beyond just the rankings on the leaderboard. The methods are illustrated with a recently completed competition to evaluate algorithms capable of detecting, identifying, and locating radioactive materials in an urban environment.
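As a rough illustration of the GLM-based post-competition analysis, the sketch below fits a logistic GLM to simulated per-scenario outcomes, asking which scenario features drive success once team identity is controlled for. The column names, the simulated generative story, and the feature `source_strength` are hypothetical stand-ins; an actual analysis would use the host's recorded per-scenario results.

```python
# A hedged sketch of a GLM-based post-competition analysis on simulated data.
# Column names and the generative story are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
team = rng.choice(["A", "B", "C"], size=n)         # competitor identity
source_strength = rng.uniform(1.0, 5.0, size=n)    # a scenario difficulty feature

# Assumed generative story: stronger sources are easier, and team A is slightly better.
p_correct = 1.0 / (1.0 + np.exp(-(-1.5 + 0.5 * source_strength + 0.6 * (team == "A"))))
correct = rng.binomial(1, p_correct)

results = pd.DataFrame(
    {"correct": correct, "team": team, "source_strength": source_strength}
)

# Logistic GLM: which scenario features drive success, controlling for team?
model = smf.glm(
    "correct ~ team + source_strength",
    data=results,
    family=sm.families.Binomial(),
).fit()
print(model.summary())
```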


Acoustic Scene Classification: A Competition Review

arXiv.org Machine Learning

In this paper we study the problem of acoustic scene classification, i.e., categorization of audio sequences into mutually exclusive classes based on their spectral content. We describe the methods and results from a competition organized in the context of a graduate machine learning course, with entries from both the students and external participants. We identify the most suitable methods and study the impact of each by performing an ablation study of the mixture of approaches. We also compare the results with a neural network baseline and show the improvement over it. Finally, we discuss the impact of using a competition as part of a university course and justify its importance in the curriculum based on student feedback.
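For readers unfamiliar with the task, the sketch below shows one generic way to build a spectral-content baseline: summarize each clip by log-mel spectrogram statistics and train a small neural network classifier. The function names, feature choices, and the placeholder `train_paths`/`train_labels` are assumptions for illustration, not the course baseline or the competition's winning methods.

```python
# A hedged sketch of a generic acoustic-scene-classification baseline:
# log-mel spectrogram statistics fed to a small neural network.

import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def scene_features(path, sr=22050, n_mels=64):
    """Summarize a clip by the mean and std of its log-mel spectral bands."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

def train_baseline(train_paths, train_labels):
    """train_paths/train_labels: hypothetical audio files and their scene labels
    (e.g. "street", "office", "park"), one mutually exclusive label per clip."""
    X = np.stack([scene_features(p) for p in train_paths])
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
    clf.fit(X, train_labels)
    return clf
```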


AutoML @ NeurIPS 2018 challenge: Design and Results

arXiv.org Machine Learning

Machine learning has achieved great successes in online advertising, recommender systems, financial market analysis, computer vision, computational linguistics, bioinformatics and many other fields. However, its success crucially relies on human machine learning experts, who are involved, to some extent, in all stages of system design. In fact, it is still common for humans to make critical decisions in areas such as converting a real-world problem into a machine learning one, data gathering, formatting and preprocessing, feature engineering, selecting or designing model architectures, hyper-parameter tuning, assessment of model performance, and deploying online ML systems, among others.
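To ground what automating a few of these stages looks like in practice, the sketch below wires preprocessing, model choice, and hyper-parameter tuning into a single search using off-the-shelf scikit-learn tools. The dataset, estimators, and parameter grid are illustrative assumptions; this is not one of the AutoML systems evaluated in the challenge.

```python
# A hedged sketch of automating preprocessing, model selection, and
# hyper-parameter tuning with scikit-learn; purely illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("impute", SimpleImputer()),        # data formatting / preprocessing
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),  # placeholder estimator
])

# Search jointly over model families and their hyper-parameters, i.e. the
# decisions a human expert would otherwise make by hand.
search = GridSearchCV(
    pipe,
    param_grid=[
        {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
        {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [100, 300]},
    ],
    cv=5,
    scoring="balanced_accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```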


March Machine Learning Mania, 1st Place Winner's Interview: Andrew Landgraf

#artificialintelligence

Kaggle's 2017 March Machine Learning Mania competition challenged Kagglers to do what millions of sports fans do every year: try to predict the winners and losers of the US men's college basketball tournament. In this winner's interview, 1st place winner Andrew Landgraf describes how he cleverly analyzed his competition to optimize his luck. I am interested in sports analytics and have followed the previous competitions on Kaggle. Reading last year's winner's interview, I realized that luck is a major component of winning this competition, just as it is with any bracket. I wanted to see if there was a way of maximizing my luck.