Sanity check

Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?

Neural Information Processing Systems

There have been long-standing controversies and inconsistencies in the literature over the experimental setup and the criteria for identifying a winning ticket. To reconcile these, we revisit the definition of the lottery ticket hypothesis under comprehensive and more rigorous conditions. Under our new definition, we present concrete evidence clarifying whether winning tickets exist across the major DNN architectures and applications. Through extensive experiments, we quantitatively analyze the correlations between winning tickets and various experimental factors, and empirically study the patterns in our observations. We find that key training hyperparameters, such as the learning rate and the number of training epochs, as well as architectural characteristics such as capacity and residual connections, are all highly correlated with whether and when winning tickets can be identified. Based on this analysis, we summarize a guideline for parameter settings with respect to specific architectural characteristics, which we hope will catalyze research progress on the lottery ticket hypothesis.
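The procedure the hypothesis refers to can be illustrated with a minimal sketch of magnitude pruning with weight rewinding. This is a toy illustration, not the paper's experimental setup: the helper name, the sparsity level, and the stand-in "training" step are all assumptions for demonstration.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask keeping the largest-magnitude weights.

    `sparsity` is the fraction of weights to remove
    (hypothetical helper, not from the paper).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))               # weights at initialization
w_trained = w_init + rng.normal(size=(4, 4))   # stand-in for a training run

# Prune by trained magnitude, then rewind survivors to their init values;
# the masked-out init weights form the candidate "winning ticket".
mask = magnitude_prune_mask(w_trained, sparsity=0.5)
ticket = w_init * mask
```

Whether retraining `ticket` matches the dense network's accuracy is exactly the question the paper's stricter definition pins down.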


Sanity Checks for Saliency Maps

Neural Information Processing Systems

Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that sole reliance on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data-generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor a model. Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings.
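The model-independence failure mode described above can be probed with a toy model-randomization check: compute a saliency map for trained weights and for randomized weights, then compare the rankings. A method that passes should produce maps that change substantially under randomization. This is a minimal sketch under stated assumptions; the linear model, the gradient saliency choice, and the helper names are illustrative, not the paper's protocol.

```python
import numpy as np

def gradient_saliency(w, x):
    """For a linear score f(x) = w @ x, the input gradient is w,
    so gradient saliency reduces to |w| (toy model assumption)."""
    return np.abs(w)

def rank_correlation(a, b):
    """Spearman rank correlation, numpy-only (no ties expected here)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(1)
x = rng.normal(size=32)
w_trained = rng.normal(size=32)   # stand-in for trained weights
w_random = rng.normal(size=32)    # randomized model for the sanity check

s_trained = gradient_saliency(w_trained, x)
s_random = gradient_saliency(w_random, x)

# Low similarity means the saliency map depends on the model's weights,
# i.e. the method passes this particular sanity check.
similarity = rank_correlation(s_trained, s_random)
```

An edge-detector-like method, by contrast, would return the same map for both weight settings and score a similarity near 1.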



uses the final accuracy of SGD as a sanity check for the quality of models trained with AutoAssist (e.g., BLEU

Neural Information Processing Systems

We thank the reviewers for their comments. We will carefully revise the paper according to the suggestions. Figure 1: Comparison of different learning schemes on the RotMNIST classification and IWSLT translation tasks. For the NMT tasks, we used the same parameter settings as previous papers, as described in Section 5.2. The assistant model shows similar performance across different batch sizes. In the revised version, we will additionally provide results on the raw ImageNet dataset and a large Transformer model.