First, we sincerely thank all reviewers for their thoughtful comments and suggestions. We will report the variance and statistical significance of our empirical results in our revision. These shed light on the value of generative adversarial training for model-based reinforcement learning (RL) with offline data, especially the approach's effectiveness as an online recommender. These two factors help control bias in value estimation for model-based RL. Please refer to Lines 9-15 for our responses to possible new empirical evaluations.
Diversity by Design: Addressing Mode Collapse Improves scRNA-seq Perturbation Modeling on Well-Calibrated Metrics
Gabriel M. Mejia, Henry E. Miller, Francis J. A. Leblanc, Bo Wang, Brendan Swain, Lucas Paulo de Lima Camillo
Recent benchmarks reveal that models for single-cell perturbation response are often outperformed by simply predicting the dataset mean. We trace this anomaly to a metric artifact: control-referenced deltas and unweighted error metrics reward mode collapse whenever the control is biased or the biological signal is sparse. Large-scale \textit{in silico} simulations and analysis of two real-world perturbation datasets confirm that shared reference shifts, not genuine biological change, drive high performance in these evaluations. We introduce differentially expressed gene (DEG)-aware metrics, weighted mean-squared error (WMSE) and weighted delta $R^{2}$ ($R^{2}_{w}(\Delta)$) computed with respect to all perturbations, that measure error in niche signals with high sensitivity. We further introduce negative and positive performance baselines to calibrate these metrics. With these improvements, the mean baseline sinks to null performance while genuine predictors are correctly rewarded. Finally, we show that using WMSE as a loss function reduces mode collapse and improves model performance.
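The abstract's core idea is that an unweighted error metric lets a mode-collapsed prediction (the dataset mean) score well, because most genes do not change under perturbation. A minimal sketch of a DEG-weighted MSE illustrates the point; the specific weights and toy values below are hypothetical, not the paper's actual weighting scheme:

```python
import numpy as np

def wmse(y_true, y_pred, weights):
    """Weighted mean-squared error: per-gene weights up-weight
    differentially expressed genes (DEGs), so errors in the sparse
    biological signal are not drowned out by unchanged genes."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sum(w * diff ** 2))

# Toy example: 5 genes, only gene 0 responds to the perturbation.
y_true = np.array([2.0, 0.1, 0.0, 0.1, 0.0])  # true response
y_mean = np.array([0.4, 0.1, 0.0, 0.1, 0.0])  # mean-baseline prediction
deg_weights = np.array([10.0, 1.0, 1.0, 1.0, 1.0])  # hypothetical DEG weights

print(wmse(y_true, y_mean, deg_weights))   # weighted error, ~1.829
print(np.mean((y_true - y_mean) ** 2))     # plain MSE, ~0.512
```

Because the weights concentrate on the single DEG, the mean baseline's missed signal dominates the weighted score, whereas plain MSE averages the same error away over the four unchanged genes.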