Neural Information Processing Systems 

Many important datasets contain samples that are missing one or more feature values. Maintaining the interpretability of machine learning models in the presence of such missing data is challenging. Singly or multiply imputing missing values complicates the model's mapping from features to labels. On the other hand, reasoning on indicator variables that represent missingness introduces a potentially large number of additional terms, sacrificing sparsity.
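The trade-off above can be made concrete with a minimal sketch contrasting the two approaches: single (mean) imputation, which hides the missingness decision inside the feature values, and missingness indicators, which preserve that information at the cost of extra columns. The toy array below is hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical toy data: rows are samples, columns are features;
# np.nan marks a missing value.
X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [np.nan, 5.0]])

# Approach 1: single (mean) imputation. Gaps are filled, but the
# model's feature-to-label mapping now depends on the imputation choice.
col_means = np.nanmean(X, axis=0)
X_imputed = np.where(np.isnan(X), col_means, X)

# Approach 2: missingness indicators. Append one binary column per
# feature; missingness is preserved explicitly, but the feature count
# doubles -- the sparsity cost the abstract refers to.
indicators = np.isnan(X).astype(float)
X_indicator = np.hstack([np.where(np.isnan(X), 0.0, X), indicators])

print(X_imputed)
print(X_indicator.shape)
```

Multiple imputation would repeat Approach 1 with several draws and average the resulting models, compounding the interpretability issue the abstract describes.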