Review for NeurIPS paper: Sparse Learning with CART

Summary and Contributions: This paper investigates theoretical and statistical properties of the CART methodology that are often overlooked. It provides an in-depth explanation and analysis of the CART algorithm, in enough detail for others to replicate it. In doing so, it quantifies the reduction in training error achieved by each recursive binary split. Additionally, it shows that models trained with this methodology have, with high probability, bounded training error, even under arbitrary sparsity in the response. Establishing the effectiveness of the model is important to anyone seeking to make a data-dependent decision.
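To make the training-error property concrete for readers less familiar with CART, here is a minimal sketch (not the paper's code, and using illustrative helper names like `sse` and `best_split`) of a single CART-style binary split on one feature under squared-error impurity. Because the no-split option is always among the candidates, the combined impurity of the two children can never exceed the parent's, so each recursive split can only decrease training error.

```python
# Minimal sketch of one CART-style split (illustrative, not the paper's code).

def sse(ys):
    """Sum of squared errors around the mean: CART's node impurity for regression."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Return (threshold, total child SSE) minimizing impurity over candidate splits.

    Starts from the unsplit parent, so the returned child error is never
    larger than the parent's impurity.
    """
    pairs = sorted(zip(xs, ys))
    best = (None, sse(ys))  # baseline: leave the node unsplit
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        total = sse(left) + sse(right)
        if total < best[1]:
            thr = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint threshold
            best = (thr, total)
    return best

# Toy data with two clear response levels (hypothetical values):
xs = [0.1, 0.4, 0.5, 0.9, 1.3, 1.7]
ys = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]
thr, child_err = best_split(xs, ys)
parent_err = sse(ys)
assert child_err <= parent_err  # training error never increases after a split
```

Recursing on each child with the same procedure yields the full tree; the assertion above is the per-split monotonicity that the paper's analysis builds on.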