the value of generative adversarial training for model-based reinforcement learning (RL) with offline data, especially

Neural Information Processing Systems

First, we sincerely thank all reviewers for their thoughtful comments and suggestions. We will report the variance and statistical significance of our empirical results in the revision. These results shed light on the approach's effectiveness as an online recommender. These two factors help control bias in value estimation for model-based RL. Please refer to Lines 9-15 for our responses to possible new empirical evaluations.


Diversity by Design: Addressing Mode Collapse Improves scRNA-seq Perturbation Modeling on Well-Calibrated Metrics

Mejia, Gabriel M., Miller, Henry E., Leblanc, Francis J. A., Wang, Bo, Swain, Brendan, Camillo, Lucas Paulo de Lima

arXiv.org Machine Learning

Recent benchmarks reveal that models for single-cell perturbation response are often outperformed by simply predicting the dataset mean. We trace this anomaly to a metric artifact: control-referenced deltas and unweighted error metrics reward mode collapse whenever the control is biased or the biological signal is sparse. Large-scale in silico simulations and analysis of two real-world perturbation datasets confirm that shared reference shifts, not genuine biological change, drive high performance in these evaluations. We introduce differentially expressed gene (DEG)-aware metrics, weighted mean-squared error (WMSE) and weighted delta $R^{2}$ ($R^{2}_{w}(\Delta)$) with respect to all perturbations, that measure error in niche signals with high sensitivity. We further introduce negative and positive performance baselines to calibrate these metrics. With these improvements, the mean baseline sinks to null performance while genuine predictors are correctly rewarded. Finally, we show that using WMSE as a loss function reduces mode collapse and improves model performance.
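The idea behind these DEG-aware metrics can be sketched in a few lines: weight each gene's error so that a few differentially expressed genes dominate the score instead of the many unchanged genes. The abstract does not specify the exact weighting scheme, so the `weights` argument and both function names below are illustrative, not the paper's implementation.

```python
import numpy as np

def wmse(y_true, y_pred, weights):
    """Weighted MSE: per-gene weights (e.g. upweighting DEGs) keep sparse
    biological signal from being drowned out by unchanged genes."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the metric is a weighted mean
    return float(np.sum(w * (np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def weighted_delta_r2(y_true, y_pred, control, weights):
    """Weighted R^2 on control-referenced deltas. A model that just
    reproduces the control (delta = 0) scores at or below zero, so a
    collapsed predictor is no longer rewarded."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    dt = np.asarray(y_true, dtype=float) - np.asarray(control, dtype=float)
    dp = np.asarray(y_pred, dtype=float) - np.asarray(control, dtype=float)
    ss_res = np.sum(w * (dt - dp) ** 2)          # weighted residual error
    ss_tot = np.sum(w * (dt - np.sum(w * dt)) ** 2)  # weighted delta variance
    return float(1.0 - ss_res / ss_tot)
```

With this framing, a predictor that outputs the control profile gets `ss_res >= ss_tot` and hence a non-positive score, which matches the paper's point that the mean baseline should sink to null performance under a calibrated metric.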


Control Bias and Eliminate Blind Spots in Machine Learning and Artificial Intelligence

#artificialintelligence

Machine bias is unavoidable, but it is manageable. By opening up the black boxes at the center of AI solutions, technical professionals can improve the quality and capability of their applications while also reducing risks to the enterprise.