Response to

Neural Information Processing Systems 

Methods 2.3), and these absolute gradients are smoothed, thus it can handle negative … Note that our epochs are large (e.g. for K562 …). We train binary SPI1 models with the smoothness prior [Erion et al., 2019] over several random … We perform this same comparison with the "sparsity" prior defined in Erion et al. [2019] … Thus, to be consistent in our comparisons, we trained our "no …" … Note that profile models aren't typically trained with L1/L2/dropout, per … Harmoniously merging our prior with traditional regularization is a good direction for future work.

Why did the auPRC of peak overlap not improve in some cases with the Fourier-based prior?

The Fourier-based prior's improvements were consistent and statistically significant in the vast majority of experiments (as …). We emphasize that this is not a failure of the prior, but a symptom intrinsic to the binary architecture.

Why does penalizing gradients improve DeepSHAP scores?
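To make the comparison concrete, here is a minimal sketch of the two attribution-prior penalties from Erion et al. [2019] referenced above: a "smoothness" penalty (total variation between adjacent per-position attributions) and a "sparsity" penalty (L1 norm of the attributions). The function names and the use of NumPy are illustrative assumptions, not the authors' implementation; in training these penalties would be computed on gradient-based attributions and added to the task loss.

```python
import numpy as np

def smoothness_penalty(attributions):
    """Smoothness prior (Erion et al., 2019): sum of absolute
    differences between attributions at adjacent positions,
    penalizing high-frequency, jagged attribution tracks."""
    a = np.asarray(attributions, dtype=float)
    return float(np.sum(np.abs(np.diff(a))))

def sparsity_penalty(attributions):
    """Sparsity prior (Erion et al., 2019): L1 norm of the
    attributions, encouraging most positions to contribute ~0."""
    a = np.asarray(attributions, dtype=float)
    return float(np.sum(np.abs(a)))

# A smooth track incurs a smaller smoothness penalty than a noisy one
# with the same dynamic range:
smooth = smoothness_penalty([0.0, 0.1, 0.2, 0.3])
noisy = smoothness_penalty([0.0, 0.3, 0.0, 0.3])
```

In a training loop, one of these terms would be scaled by a tunable coefficient and added to the binary cross-entropy loss, which is why (as noted above) "no prior" baselines should be trained under the same schedule for a fair comparison.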
