Do covariates explain why these groups differ? The choice of reference group can reverse conclusions in the Oaxaca-Blinder decomposition

Quintero, Manuel, Shreekumar, Advik, Stephenson, William T., Broderick, Tamara

arXiv.org Machine Learning

Scientists often want to explain why an outcome is different in two groups. For instance, differences in patient mortality rates across two hospitals could be due to differences in the patients themselves (covariates) or differences in medical care (outcomes given covariates). The Oaxaca-Blinder decomposition (OBD) is a standard tool to tease apart these factors. It is well known that the OBD requires choosing one of the groups as a reference, and the numerical answer can vary with the reference. To the best of our knowledge, there has not been a systematic investigation into whether the choice of OBD reference can yield different substantive conclusions and how common this issue is. In the present paper, we give existence proofs in real and simulated data that the OBD references can yield substantively different conclusions and that these differences are not entirely driven by model misspecification or small data. We prove that substantively different conclusions occur in up to half of the parameter space, but find these discrepancies rare in the real-data analyses we study. We explain this empirical rarity by examining how realistic data-generating processes can be biased towards parameters that do not change conclusions under the OBD.
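The reference-group sensitivity the abstract describes is easy to see in a toy two-fold OBD. The sketch below is illustrative only (simulated data and a simple linear model, not the paper's analysis): it splits the mean outcome gap between two groups into an "explained" part, which prices covariate differences at the reference group's coefficients, and an "unexplained" part due to coefficient differences, then shows that swapping the reference changes how much of the gap counts as explained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two groups whose covariate means AND outcome models differ.
n = 2000
X_a = rng.normal(1.0, 1.0, size=(n, 1))              # group A covariates
X_b = rng.normal(0.0, 1.0, size=(n, 1))              # group B covariates
y_a = X_a @ np.array([2.0]) + rng.normal(0.0, 0.1, n)  # group A outcomes
y_b = X_b @ np.array([1.0]) + rng.normal(0.0, 0.1, n)  # group B outcomes

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    Z = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def oaxaca_blinder(X_a, y_a, X_b, y_b, reference="b"):
    """Two-fold OBD of the mean gap y_bar_a - y_bar_b.

    'explained' prices the covariate-mean difference at the reference
    group's coefficients; 'unexplained' is the remainder, attributed
    to coefficient (outcome-model) differences."""
    b_a, b_b = ols(X_a, y_a), ols(X_b, y_b)
    z_a = np.concatenate([[1.0], X_a.mean(axis=0)])
    z_b = np.concatenate([[1.0], X_b.mean(axis=0)])
    gap = z_a @ b_a - z_b @ b_b  # equals the raw mean gap under OLS
    if reference == "b":
        explained = (z_a - z_b) @ b_b
        unexplained = z_a @ (b_a - b_b)
    else:
        explained = (z_a - z_b) @ b_a
        unexplained = z_b @ (b_a - b_b)
    return gap, explained, unexplained

gap_b, exp_b, unexp_b = oaxaca_blinder(X_a, y_a, X_b, y_b, reference="b")
gap_a, exp_a, unexp_a = oaxaca_blinder(X_a, y_a, X_b, y_b, reference="a")
# With reference B, roughly half the gap is "explained" by covariates;
# with reference A, nearly all of it is -- same data, different story.
```

Both decompositions sum to the same gap exactly; only the explained/unexplained split moves, which is the mechanism behind the reversed conclusions studied in the paper.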


Can Michael Pollan crack the problem of consciousness in his new book?

New Scientist

Consciousness is one of the most perplexing questions in science. You would expect our intimacy with it to give us a leg up in understanding how it works, but this has proven to be more of a hindrance than a help. So how can you study something objectively when it is also the very tool you are using to do the studying? This conundrum forms the backbone of Michael Pollan's latest book. Pollan's previous works helped bring the environmental and animal welfare impacts of the US food system to light and introduced the public to the psychedelic research renaissance.




The Crucial Role of Normalization in Sharpness-Aware Minimization

Yan Dai

Neural Information Processing Systems

Sharpness-Aware Minimization (SAM) is a recently proposed gradient-based optimizer (Foret et al., ICLR 2021) that greatly improves the prediction performance of deep neural networks.
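The normalization in the title refers to a specific step in the SAM update: the weight perturbation is taken along the gradient direction scaled to length rho, i.e. divided by the gradient norm. A minimal numpy sketch of that update, with an illustrative toy loss (not from the paper), looks like this:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update in the Foret et al. (ICLR 2021) formulation.

    SAM first perturbs the weights by rho along the *normalized*
    gradient direction (the normalization this paper's title refers
    to), then descends using the gradient at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step
    return w - lr * grad_fn(w + eps)             # descend from the original w

# Toy example on L(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
```

Without the division by the gradient norm, the perturbation size would scale with the raw gradient magnitude rather than being fixed at rho, which is the distinction the paper analyzes.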