SGM: A Statistical Gödel Machine for Risk-Controlled Recursive Self-Modification
Xuening Wu, Shenqin Yin, Yanlan Kang, Xinhang Zhang, Qianya Xu, Zeping Chen, Wenqiang Zhang
–arXiv.org Artificial Intelligence
Recursive self-modification has often been discussed as a cornerstone for building continually improving ML systems (Yampolskiy, 2015). Modern ML already hints at this trend: reinforcement learning agents tune hyperparameters online, AutoML loops search over training recipes, and optimization pipelines reconfigure code and settings during runs. Yet these procedures often adopt changes on the basis of noisy gains, creating the risk of harmful edits: modifications that seem beneficial in finite trials but ultimately degrade true performance. Such risks are especially concerning in high-stakes scientific domains such as drug design, protein engineering, or climate modeling, where spurious gains can misdirect costly pipelines. Gödel machines (Schmidhuber, 2007) offer a conceptually clean answer: an agent rewrites its code only when it can prove the rewrite increases expected utility. But in stochastic, high-dimensional ML, such formal proofs are unattainable. At the other extreme, practical AutoML and RL systems adopt edits using heuristics such as rolling averages, best-of-seeds, or bandit rules, which lack guarantees and may silently accumulate regressions.
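The middle ground the abstract gestures at, between unattainable proofs and guarantee-free heuristics, can be illustrated with a simple statistical acceptance gate: adopt a self-modification only when a one-sided confidence lower bound on its measured gain is positive. This is an illustrative sketch, not the paper's SGM procedure; the function name, the paired-gains input, and the normal-approximation bound are all assumptions made for the example.

```python
import math
import statistics

def accept_edit(deltas, z=1.645):
    """Statistical gate for a candidate self-modification (illustrative).

    deltas: paired per-trial performance gains (candidate minus baseline).
    z: one-sided normal quantile; 1.645 corresponds to a 95% lower
       confidence bound under a normal approximation.

    Returns True only if the lower confidence bound on the mean gain
    is strictly positive, i.e. the observed improvement is unlikely
    to be noise at the chosen level.
    """
    n = len(deltas)
    mean = statistics.fmean(deltas)
    # Sample standard deviation; with a single trial the bound is vacuous.
    sd = statistics.stdev(deltas) if n > 1 else float("inf")
    lower = mean - z * sd / math.sqrt(n)
    return lower > 0

# A consistent, clearly positive gain passes the gate...
print(accept_edit([0.5, 0.6, 0.4, 0.55, 0.45]))  # True
# ...while a noisy, sign-flipping gain does not.
print(accept_edit([0.1, -0.2, 0.05, -0.1]))      # False
```

Heuristics such as rolling averages or best-of-seeds would accept the second edit whenever a lucky trial happened to land on top; a confidence-bound gate of this kind rejects it unless the evidence survives the noise.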
Oct-14-2025