Review for NeurIPS paper: NeuMiss networks: differentiable programming for supervised learning with missing values.

Neural Information Processing Systems 

Summary and Contributions: The paper derives analytical expressions for the optimal predictor in the presence of Missing Completely At Random (MCAR), Missing At Random (MAR), and self-masking missingness in the linear Gaussian setting. It then proposes the NeuMiss network, based on a Neumann-series approximation, for learning the optimal predictor in the MAR case, and shows its connection to neural networks with ReLU activations. Learning the optimal predictor from data with missing values poses two challenges: 1) the MAR optimal predictor requires inverting a covariance submatrix for each missingness pattern; 2) there are 2^d possible missingness patterns, each with its own optimal predictor, where d is the number of features/covariates. For the first, the paper approximates the matrix inversion recursively via a truncated Neumann series, with convergence and approximation-error guarantees. For the second, the NeuMiss network shares weights across the predictors for different missingness patterns, which turns out to be empirically more data-efficient and robust, even under self-masking missingness.
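To make the first point concrete, here is a minimal sketch (not the paper's exact scheme) of the Neumann-series idea the review refers to: for a suitably scaled matrix, the inverse can be approximated by the truncated series A^{-1} ≈ (1/c) Σ_k (I − A/c)^k, computed recursively. All names below are illustrative.

```python
import numpy as np

def neumann_inverse(A, n_iter=100):
    """Approximate the inverse of an SPD matrix A by a truncated
    Neumann series: A^{-1} = (1/c) * sum_k (I - A/c)^k, which
    converges when the spectral radius of (I - A/c) is below 1."""
    d = A.shape[0]
    c = np.linalg.norm(A, 2)      # spectral norm ensures convergence for SPD A
    M = np.eye(d) - A / c
    approx = np.eye(d)
    term = np.eye(d)
    for _ in range(n_iter):       # recursive accumulation of series terms
        term = term @ M
        approx = approx + term
    return approx / c

# Usage: invert a covariance-like (SPD) matrix and check the residual.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)       # well-conditioned SPD matrix
err = np.linalg.norm(neumann_inverse(A) @ A - np.eye(5))
```

The truncation depth trades accuracy for compute; in the paper this depth corresponds to network layers, with the iteration unrolled and its weights learned rather than fixed.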