Constructing a new neural network model for each new dataset is a recurring burden for data scientists. What if accumulated experience could be used to forecast a network's accuracy before it is ever trained? That was the goal of a recent project at IBM Research, and the result is TAPAS, the Train-less Accuracy Predictor for Architecture Search (click for demo). Its trick is that it can estimate, in fractions of a second, classification performance on unseen input datasets, for both image and text classification, without any training. In contrast to previously proposed approaches, TAPAS is calibrated not only on topological network information but also on a characterization of the dataset's difficulty, which allows us to re-tune the prediction without any training.
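To make the idea concrete, here is a minimal sketch of what a train-less accuracy predictor could look like: simple topological features of a candidate architecture are combined with a scalar dataset-difficulty score and fed to a pre-fitted regressor. The feature choices, the linear model, and the weights below are illustrative assumptions for exposition, not the actual TAPAS model.

```python
import numpy as np

def architecture_features(depth, width, params_millions):
    """Encode simple topological descriptors of a candidate network.

    These three descriptors are hypothetical stand-ins for the richer
    topological characterization a real predictor would use.
    """
    return np.array([depth, width, params_millions], dtype=float)

def predict_accuracy(arch_feats, dataset_difficulty, weights, bias):
    """Estimate classification accuracy without training the network.

    dataset_difficulty: scalar in [0, 1]; higher means a harder dataset.
    The sigmoid squashes the linear score into (0, 1) so the output
    reads as an accuracy.
    """
    x = np.concatenate([arch_feats, [dataset_difficulty]])
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# Illustrative pre-fitted weights; in practice these would be learned
# once from a corpus of past training runs (the accumulated experience).
w = np.array([0.05, 0.01, 0.02, -3.0])
b = 0.5

feats = architecture_features(depth=20, width=64, params_millions=1.2)
easy = predict_accuracy(feats, dataset_difficulty=0.1, weights=w, bias=b)
hard = predict_accuracy(feats, dataset_difficulty=0.9, weights=w, bias=b)
print(round(easy, 3), round(hard, 3))
```

Because the difficulty score enters as an input feature, the same fitted predictor can be re-applied to a new dataset instantly: only the difficulty characterization changes, and no network is trained.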
Feb-17-2019, 17:03:17 GMT