Scientific Discovery



On robust hypothesis testing with respect to Hellinger distance

Modak, Eeshan

arXiv.org Machine Learning

We study the hypothesis testing problem where the observed samples need not come from either of the specified hypotheses (distributions). In such a situation, we would like our test to be robust to this misspecification and output the hypothesis closer in Hellinger distance to the underlying distribution. If the underlying distribution is close to equidistant from the two hypotheses, this is not possible. Our main result quantifies how close the underlying distribution must be to one of the hypotheses for robust testing to succeed. We also study the composite testing problem, where each hypothesis is a Hellinger ball around a fixed distribution. A generalized likelihood ratio test is known to work for this problem; we give an alternative test.
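For readers unfamiliar with the metric, the squared Hellinger distance between distributions $p$ and $q$ is (standard definition; the notation here is ours, not taken from the paper):

```latex
H^2(p, q) \;=\; \frac{1}{2} \int \left( \sqrt{p(x)} - \sqrt{q(x)} \right)^2 dx
         \;=\; 1 - \int \sqrt{p(x)\, q(x)}\, dx .
```

A robust test aims to output $p_1$ whenever the data-generating distribution $r$ satisfies $H(r, p_1) < H(r, p_2)$ by a sufficient margin; the paper's main result quantifies that margin.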


Structural Enforcement of Statistical Rigor in AI-Driven Discovery: A Functional Architecture

Sargsyan, Karen

arXiv.org Artificial Intelligence

Sequential statistical protocols require meticulous state management and robust error handling -- challenges naturally suited to functional programming. We present a functional architecture for structural enforcement of statistical rigor in automated research systems (AI-Scientists). These LLM-driven systems risk generating spurious discoveries through dynamic hypothesis testing. We introduce the Research monad, a Haskell eDSL that enforces sequential statistical protocols (e.g., Online FDR (false discovery rate) control) using a monad transformer stack. To address risks in hybrid architectures where LLMs generate imperative code, we employ Declarative Scaffolding -- generating rigid harnesses that structurally constrain execution and prevent methodological errors like data leakage. We validate this approach through large-scale simulation (N=2000 hypotheses) and an end-to-end case study, demonstrating essential defense-in-depth for automated science integrity.
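The sequential discipline such a protocol imposes can be illustrated outside Haskell. Below is a minimal Python sketch of an alpha-spending tester; this is a simplified stand-in (a geometric spending schedule controls family-wise error, a stricter criterion than the online FDR procedures the Research monad encodes), and the class and method names are ours, not the paper's.

```python
class SequentialTester:
    """Minimal sequential testing protocol: each new hypothesis may only
    spend a pre-committed slice of a fixed error budget, so the order of
    tests matters and no hypothesis can be retried with a looser bar."""

    def __init__(self, alpha=0.05, decay=0.5):
        self.alpha = alpha    # total error budget
        self.decay = decay    # geometric schedule: slices sum to alpha
        self.t = 0            # number of hypotheses tested so far

    def test(self, p_value):
        """Test the next hypothesis against its pre-committed threshold."""
        self.t += 1
        threshold = self.alpha * self.decay ** self.t
        return p_value <= threshold
```

A structural harness in the paper's sense would make `test` the only route to a decision; the monadic embedding enforces that by construction rather than by convention.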


Medieval duke's remains recount his grisly murder

Popular Science

In 1272 CE, a Hungarian duke was murdered in cold blood, receiving more than 23 sword gashes. Details surrounding the grisly killing of the 13th-century duke, Béla of Macsó, have remained murky for centuries. The duke met his demise at the hands of enemies, but far less is known about what motivated his killers or how the attack really unfolded.


Accelerating scientific discovery with the common task framework

Kutz, J. Nathan, Battaglia, Peter, Brenner, Michael, Carlberg, Kevin, Hagberg, Aric, Ho, Shirley, Hoyer, Stephan, Lange, Henning, Lipson, Hod, Mahoney, Michael W., Noe, Frank, Welling, Max, Zanna, Laure, Zhu, Francis, Brunton, Steven L.

arXiv.org Artificial Intelligence

Machine learning (ML) and artificial intelligence (AI) algorithms are transforming and empowering the characterization and control of dynamic systems in the engineering, physical, and biological sciences. These emerging modeling paradigms require comparative metrics to evaluate a diverse set of scientific objectives, including forecasting, state reconstruction, generalization, and control, while also considering limited data scenarios and noisy measurements. We introduce a common task framework (CTF) for science and engineering, which features a growing collection of challenge data sets with a diverse set of practical and common objectives. The CTF is a critically enabling technology that has contributed to the rapid advance of ML/AI algorithms in traditional applications such as speech recognition, language processing, and computer vision. Objective metrics of this kind are urgently needed to compare the diverse algorithms now being developed and deployed in practice across science and engineering.


Abductive Inference in Retrieval-Augmented Language Models: Generating and Validating Missing Premises

Lin, Shiyin

arXiv.org Artificial Intelligence

Large Language Models (LLMs) enhanced with retrieval -- commonly referred to as Retrieval-Augmented Generation (RAG) -- have demonstrated strong performance in knowledge-intensive tasks. However, RAG pipelines often fail when retrieved evidence is incomplete, leaving gaps in the reasoning process. In such cases, abductive inference -- the process of generating plausible missing premises to explain observations -- offers a principled approach to bridge these gaps. In this paper, we propose a framework that integrates abductive inference into retrieval-augmented LLMs. Our method detects insufficient evidence, generates candidate missing premises, and validates them through consistency and plausibility checks. Experimental results on abductive reasoning and multi-hop QA benchmarks show that our approach improves both answer accuracy and reasoning faithfulness. This work highlights abductive inference as a promising direction for enhancing the robustness and explainability of RAG systems.
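The three-step pipeline the abstract describes (detect, generate, validate) can be sketched as a skeleton. All three callables below are hypothetical stand-ins for whatever LLM or scorer fills each role; none of the names or thresholds come from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AbductiveRAG:
    """Skeleton of an abductive RAG pipeline: score evidence sufficiency,
    abduce candidate missing premises, keep only validated ones."""
    evidence_score: Callable[[str, List[str]], float]        # sufficiency of retrieved docs
    propose_premises: Callable[[str, List[str]], List[str]]  # abduce candidate premises
    premise_score: Callable[[str, List[str], str], float]    # consistency/plausibility check

    def answer_support(self, question: str, docs: List[str],
                       threshold: float = 0.5) -> List[str]:
        # Step 1: detect insufficient evidence.
        if self.evidence_score(question, docs) >= threshold:
            return docs
        # Step 2: generate candidate missing premises.
        candidates = self.propose_premises(question, docs)
        # Step 3: keep only premises that pass validation.
        valid = [p for p in candidates
                 if self.premise_score(question, docs, p) >= threshold]
        return docs + valid
```

The downstream answer generator would then condition on `docs` plus any validated premises instead of the raw retrieval alone.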


MOOSE-Chem3: Toward Experiment-Guided Hypothesis Ranking via Simulated Experimental Feedback

Liu, Wanhao, Yang, Zonglin, Wang, Jue, Bing, Lidong, Zhang, Di, Zhou, Dongzhan, Li, Yuqiang, Li, Houqiang, Cambria, Erik, Ouyang, Wanli

arXiv.org Artificial Intelligence

Hypothesis ranking is vital for automated scientific discovery, especially in cost-intensive, throughput-limited natural science domains. Current methods focus on pre-experiment ranking, relying solely on language model reasoning without empirical feedback. We introduce experiment-guided ranking, which prioritizes hypotheses based on feedback from prior tests. Due to the impracticality of real experiments, we propose a simulator grounded in domain-specific concepts that models hypothesis performance as a function of similarity to a hidden ground truth, perturbed by noise. Validated against 124 hypotheses with experimentally reported outcomes, the simulator approximates real results with consistent trend alignment. Although deviations exist, they mimic wet-lab noise, promoting more robust ranking strategies. We frame experiment-guided ranking as a sequential decision-making problem and propose an in-context reinforcement learning (ICRL) framework. Our LLM-based policy decomposes hypotheses into functional elements, clusters them by mechanistic roles, and prioritizes recombinations based on feedback. Experiments show our approach significantly outperforms pre-experiment baselines and strong ablations. Our toolkit, comprising the simulator and ICRL framework, enables systematic research on experiment-guided ranking, with the policy serving as a strong proof of concept.
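The simulator's core idea (observed performance as a noise-perturbed function of hidden similarity to the ground truth) can be sketched in a few lines. This is a toy stand-in, not the paper's domain-grounded simulator; the similarity values, noise model, and function names are all our assumptions.

```python
import random

def simulate_feedback(hypothesis_sim, noise_sd=0.1, rng=None):
    """Toy experimental feedback: performance is the hypothesis's hidden
    similarity to the ground truth, perturbed by Gaussian noise that
    mimics wet-lab variance."""
    rng = rng or random.Random()
    return hypothesis_sim + rng.gauss(0.0, noise_sd)

def rank_by_feedback(similarities, trials=5, rng=None):
    """Rank hypotheses (indices) by mean simulated feedback, best first."""
    rng = rng or random.Random(0)
    scores = [sum(simulate_feedback(s, rng=rng) for _ in range(trials)) / trials
              for s in similarities]
    return sorted(range(len(similarities)), key=lambda i: -scores[i])
```

An experiment-guided policy would interleave such feedback with its ranking decisions rather than batch all trials up front, which is where the sequential decision-making framing enters.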


AutoSciDACT: Automated Scientific Discovery through Contrastive Embedding and Hypothesis Testing

Bright-Thonney, Samuel, Reissel, Christina, Grosso, Gaia, Woodward, Nathaniel, Govorkova, Katya, Novak, Andrzej, Park, Sang Eon, Moreno, Eric, Harris, Philip

arXiv.org Machine Learning

Novelty detection in large scientific datasets faces two key challenges: the noisy and high-dimensional nature of experimental data, and the necessity of making statistically robust statements about any observed outliers. While there is a wealth of literature on anomaly detection via dimensionality reduction, most methods do not produce outputs compatible with quantifiable claims of scientific discovery. In this work we directly address these challenges, presenting the first step towards a unified pipeline for novelty detection adapted for the rigorous statistical demands of science. We introduce AutoSciDACT (Automated Scientific Discovery with Anomalous Contrastive Testing), a general-purpose pipeline for detecting novelty in scientific data. AutoSciDACT begins by creating expressive low-dimensional data representations using contrastive pre-training, leveraging the abundance of high-quality simulated data in many scientific domains alongside domain expertise that can guide principled data augmentation strategies. These compact embeddings then enable an extremely sensitive machine learning-based two-sample test using the New Physics Learning Machine (NPLM) framework, which identifies and statistically quantifies deviations in observed data relative to a reference distribution (null hypothesis). We perform experiments across a range of astronomical, physical, biological, image, and synthetic datasets, demonstrating strong sensitivity to small injections of anomalous data across all domains.
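The calibration logic behind such a two-sample test can be illustrated with a generic permutation test on one-dimensional embedding scores. This is not NPLM (which learns a likelihood-ratio statistic); the statistic here is just an absolute difference of means, but the permutation machinery for turning a statistic into a p-value is the same.

```python
import random

def two_sample_perm_test(ref, obs, n_perm=1000, rng=None):
    """Permutation two-sample test: how often does a random relabeling of
    the pooled samples produce a statistic at least as extreme as the
    observed one?  Returns a small-sample-safe p-value."""
    rng = rng or random.Random(0)
    stat = abs(sum(obs) / len(obs) - sum(ref) / len(ref))
    pooled = list(ref) + list(obs)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_ref, perm_obs = pooled[:len(ref)], pooled[len(ref):]
        perm_stat = abs(sum(perm_obs) / len(perm_obs)
                        - sum(perm_ref) / len(perm_ref))
        if perm_stat >= stat:
            count += 1
    return (count + 1) / (n_perm + 1)
```

A discovery claim would then report this p-value (or its equivalent significance) against the reference distribution, which is the quantifiable output the abstract argues most anomaly-detection methods lack.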


A New Paradigm for Protecting Homes from Disastrous Fires

The New Yorker

Scientists have identified more than fifty ways that houses can ignite. It's possible to defend against all of them--but it's arduous, and homeowners can't do it alone. In June, 2012, hundreds of homes in Mountain Shadows, Colorado, a subdivision in the foothills of the Rockies, were reduced to ash during the wind-whipped Waldo Canyon Fire. On a cul-de-sac called Hot Springs Court, however, four dwellings somehow remained standing. The mystery of their survival nagged at Alex Maranghides, a fire-protection engineer at the National Institute of Standards and Technology (NIST), who worked with several colleagues on a meticulous reconstruction of the fire. How did the homes make it through? Was there something special about them--a fireproof roof, say, or a fancy sprinkler system? The team collected weather reports, topographic data, G.P.S. records from fire engines, photos, videos, and property-damage reports.


From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery

Wei, Jiaqi, Yang, Yuejin, Zhang, Xiang, Chen, Yuhan, Zhuang, Xiang, Gao, Zhangyang, Zhou, Dongzhan, Wang, Guangshuai, Gao, Zhiqiang, Cao, Juntai, Qiu, Zijie, Hu, Ming, Ma, Chenglong, Tang, Shixiang, He, Junjun, Song, Chunfeng, He, Xuming, Zhang, Qiang, You, Chenyu, Zheng, Shuangjia, Ding, Ning, Ouyang, Wanli, Dong, Nanqing, Cheng, Yu, Sun, Siqi, Bai, Lei, Zhou, Bowen

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is reshaping scientific discovery, evolving from specialized computational tools into autonomous research partners. We position Agentic Science as a pivotal stage within the broader AI for Science paradigm, where AI systems progress from partial assistance to full scientific agency. Enabled by large language models (LLMs), multimodal systems, and integrated research platforms, agentic AI shows capabilities in hypothesis generation, experimental design, execution, analysis, and iterative refinement -- behaviors once regarded as uniquely human. This survey provides a domain-oriented review of autonomous scientific discovery across life sciences, chemistry, materials science, and physics. We unify three previously fragmented perspectives -- process-oriented, autonomy-oriented, and mechanism-oriented -- through a comprehensive framework that connects foundational capabilities, core processes, and domain-specific realizations. Building on this framework, we (i) trace the evolution of AI for Science, (ii) identify five core capabilities underpinning scientific agency, (iii) model discovery as a dynamic four-stage workflow, (iv) review applications across the above domains, and (v) synthesize key challenges and future opportunities. This work establishes a domain-oriented synthesis of autonomous scientific discovery and positions Agentic Science as a structured paradigm for advancing AI-driven research.