Neural Information Processing Systems — Author Feedback
We thank the reviewers for their thoughtful suggestions, which we will incorporate to improve the paper, and are glad they find this work […]

- From limited data, it is impossible for any method to accurately estimate nonparametric […]. GIB-1, GIB-5, and GIB-10 all greatly improve over the BASE models.
- We will clarify that Gibbs sampling's mixing rate may slow for highly correlated features (Wang […]).
- We will clarify that the consistency result does not refer to our self-attention model, but […]
- As suggested by R1, we will clarify: (1) the per-feature conditional distribution is just a univariate mixture of Gaussians, not […]. Differences should be statistically significant where p < 0.05.
- As suggested by R2, we will fix typos, use clearer captions, and clarify: (1) in Table 1, Rank is computed by […]
- We study how to improve the latency of ensemble predictors while preserving their accuracy; the approach is thus practically performant and very broadly applicable.
- We will clarify that distillation is not specific to our AutoML tool's ensemble and works with any student model type (which may be important if a user has a particular inference accelerator / […]).
- We are not aware of any accurate AutoML system for tabular data that offers cascades.