AI for Science – from cosmology to chemistry
On 31 March, our editorial team headed to the Royal Society for AI for Science. This day-long conference explored how AI is changing the nature of scientific discovery, and was hosted by the Fundamental Research team at the Alan Turing Institute. Nestled in a terrace of 19th-century townhouses along the banks of the Thames, the Royal Society looks as grand as the names that have passed through its doors over the years. Prof Jason McEwen, Chief Scientist at the Turing Institute, opened the event with an insightful talk on the nature of scientific revolutions and how the bidirectional relationship between AI and science could spark the next one. Prof Anna Scaife from the University of Manchester then spoke on the use of foundation models for astronomical discovery.
Resource-sharing boosts robotic resilience
If the goal of a robot is to perform a function, then minimizing the possibility of failure is a top priority in robotic design. But this minimization is at odds with the robotic raison d'être: systems with multiple units, or agents, can perform more diverse functions, but they also have more parts that can potentially fail. Researchers led by Jamie Paik, head of the Reconfigurable Robotics Laboratory (RRL) in EPFL's School of Engineering, have not only circumvented this problem but flipped it: they have designed a modular robot that actually lowers its odds of failure by sharing resources among its individual agents. "For the first time, we have found a way to reverse the trend of increasing odds of failure with increasing function," Paik explains. "We introduce local resource sharing as a new paradigm in robotics, reducing the failure rate with a larger number of modules."
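The reliability trade-off described above can be made concrete with a toy probability model (the numbers and the k-of-n redundancy assumption are illustrative, not from the RRL paper). If every module is essential, adding modules multiplies the ways the system can fail; if modules can cover for one another, extra modules act as spares and failure probability drops instead:

```python
from math import comb

def p_system_failure_chain(p_module, n):
    """Failure probability when every module is essential:
    the system fails if any one of the n modules fails."""
    return 1 - (1 - p_module) ** n

def p_system_failure_shared(p_module, n, k):
    """Failure probability under a k-of-n redundancy model:
    the system survives as long as at least k of n modules work."""
    p_ok = 1 - p_module
    p_survive = sum(comb(n, m) * p_ok**m * p_module**(n - m)
                    for m in range(k, n + 1))
    return 1 - p_survive

p = 0.05  # assumed per-module failure probability
# Without sharing, more modules means more ways to fail:
print(p_system_failure_chain(p, 4))   # ~0.185
print(p_system_failure_chain(p, 8))   # ~0.337
# With sharing, the extra modules become spares and failure plummets:
print(p_system_failure_shared(p, 8, 4))
```

The same qualitative reversal is what local resource sharing buys: the failure rate falls, rather than rises, as modules are added.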
What I've learned from 25 years of automated science, and what the future holds: an interview with Ross King
We're excited to launch our new series, in which we speak with leading researchers to explore the breakthroughs driving AI and the reality behind the promises - giving you an inside perspective on the headlines. Our first interviewee is Ross King, who created the first robot scientist back in 2009. He spoke to us about the nature of scientific discovery, the role AI has to play, and his recent work in DNA computing. Automated science is a really exciting area, and it feels like everyone's talking about it at the moment. But you've been working in this field for many years now. In 2009 you developed Adam, the first robot scientist to generate novel scientific knowledge. Could you tell me some more about that? So the history goes back to before Adam.
Concept frustration: Aligning human concepts and machine representations
Parisini, Enrico, Soelistyo, Christopher J., Isaac, Ahab, Barp, Alessandro, Banerji, Christopher R. S.
Aligning human-interpretable concepts with the internal representations learned by modern machine learning systems remains a central challenge for interpretable AI. We introduce a geometric framework for comparing supervised human concepts with unsupervised intermediate representations extracted from foundation model embeddings. Motivated by the role of conceptual leaps in scientific discovery, we formalise the notion of concept frustration: a contradiction that arises when an unobserved concept induces relationships between known concepts that cannot be made consistent within an existing ontology. We develop task-aligned similarity measures that detect concept frustration between supervised concept-based models and unsupervised representations derived from foundation models, and show that the phenomenon is detectable in task-aligned geometry while conventional Euclidean comparisons fail. Under a linear-Gaussian generative model we derive a closed-form expression for Bayes-optimal concept-based classifier accuracy, decomposing predictive signal into known-known, known-unknown and unknown-unknown contributions and identifying analytically where frustration affects performance. Experiments on synthetic data and real language and vision tasks demonstrate that frustration can be detected in foundation model representations and that incorporating a frustrating concept into an interpretable model reorganises the geometry of learned concept representations, to better align human and machine reasoning. These results suggest a principled framework for diagnosing incomplete concept ontologies and aligning human and machine conceptual reasoning, with implications for the development and validation of safe interpretable AI for high-risk applications.
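The abstract's claim that frustration is "detectable in task-aligned geometry while conventional Euclidean comparisons fail" can be illustrated with a toy Mahalanobis-style example (this is a didactic sketch, not the paper's actual similarity measure): two displacements of identical Euclidean length become clearly distinguishable once the space is whitened so that task-irrelevant variance is discounted.

```python
import numpy as np

# Covariance with large variance along a task-irrelevant direction
# (axis 0) and unit variance along the informative direction (axis 1).
cov = np.array([[100.0, 0.0],
                [0.0,   1.0]])
W = np.linalg.inv(np.linalg.cholesky(cov))  # whitening = "task alignment"

a = np.array([10.0, 0.0])   # displaced along the nuisance axis
b = np.array([0.0, 10.0])   # displaced along the informative axis

# Euclidean geometry cannot tell the two displacements apart...
print(np.linalg.norm(a), np.linalg.norm(b))     # 10.0 10.0
# ...while the task-aligned (whitened) metric separates them sharply.
print(np.linalg.norm(W @ a), np.linalg.norm(W @ b))   # 1.0 10.0
```

In the same spirit, a comparison carried out in a geometry aligned with the prediction task can expose structure (including contradictions induced by an unobserved concept) that raw Euclidean distances wash out.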
Roadkill is a surprising and untapped source for scientists
Millions of animals unfortunately die on roads each year, but the casualties hold important data. As much as people try to avoid it (and not contribute to it), these untimely animal deaths are an unfortunate, inevitable byproduct of a society reliant on cars. In Brazil alone, it's estimated that between two and eight million birds and mammals are killed on roadways every year. In Europe, the potential tally may climb as high as 194 million.
Expert-level protocol translation for self-driving labs
Recent developments in artificial intelligence (AI) models have propelled their application in scientific discovery, but the validation and exploration of these discoveries require subsequent empirical experimentation. Self-driving laboratories promise to automate, and thus accelerate, the experimental process that follows AI-driven discoveries. However, translating experimental protocols, originally written for human comprehension, into formats interpretable by machines presents significant challenges: within a specific expert domain, it demands structured rather than natural language, explicit rather than tacit knowledge, and the preservation of causality and consistency across protocol steps. At present, protocol translation largely requires the manual, labor-intensive involvement of domain experts and information technology specialists, making the process time-consuming. To address these issues, we propose a framework that automates protocol translation through a three-stage workflow, incrementally constructing Protocol Dependence Graphs (PDGs) that are structured at the syntax level, complete at the semantics level, and linked at the execution level. Quantitative and qualitative evaluations demonstrate performance on par with that of human experts, underscoring the framework's potential to significantly expedite and democratize scientific discovery by elevating the automation capabilities of self-driving laboratories.
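A minimal sketch of what a Protocol Dependence Graph might look like (the class names, fields, and example protocol steps are hypothetical stand-ins, not the paper's actual schema): nodes are structured protocol steps with explicit parameters, and dependency edges preserve causality so that a valid execution order can be derived.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One protocol step: a structured action with explicit parameters
    (no tacit knowledge) and causal links to prerequisite steps."""
    name: str
    action: str                    # structured verb, e.g. "add", "incubate"
    params: dict
    depends_on: list = field(default_factory=list)

def execution_order(steps):
    """Topologically sort steps so every dependency runs first,
    preserving the protocol's causality."""
    order, seen = [], set()
    def visit(s):
        if s.name in seen:
            return
        seen.add(s.name)
        for dep in s.depends_on:
            visit(dep)
        order.append(s.name)
    for s in steps:
        visit(s)
    return order

mix  = Step("mix",  "add",      {"reagent": "buffer", "volume_ul": 200})
heat = Step("heat", "incubate", {"temp_c": 37, "minutes": 30}, depends_on=[mix])
read = Step("read", "measure",  {"instrument": "plate_reader"}, depends_on=[heat])

print(execution_order([read, heat, mix]))   # ['mix', 'heat', 'read']
```

Even in this toy form, the three levels from the abstract are visible: the syntax level (structured actions and parameters), the semantics level (every quantity made explicit), and the execution level (dependency edges that fix the order).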
DiscoveryWorld: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents
Automated scientific discovery promises to accelerate progress across scientific domains, but evaluating an agent's capacity for end-to-end scientific reasoning is challenging, as running real-world experiments is often prohibitively expensive or infeasible. In this work we introduce DiscoveryWorld, a virtual environment that enables benchmarking an agent's ability to perform complete cycles of novel scientific discovery in an inexpensive, simulated, multi-modal, long-horizon, and fictional setting. DiscoveryWorld consists of 24 scientific tasks across three levels of difficulty, each with parametric variations that provide new discoveries for agents to make across runs. Tasks require an agent to form hypotheses, design and run experiments, analyze results, and act on conclusions. Task difficulties are normed to range from straightforward to challenging for human scientists with advanced degrees. DiscoveryWorld further provides three automatic metrics for evaluating performance: (1) binary task completion, (2) fine-grained report cards detailing procedural scoring of task-relevant actions, and (3) the accuracy of discovered explanatory knowledge. While simulated environments such as DiscoveryWorld are low-fidelity compared to the real world, we find that strong baseline agents struggle on most DiscoveryWorld tasks, highlighting the utility of simulated environments as proxy tasks for near-term development of scientific discovery competency in agents.
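The three evaluation signals can be sketched as a simple scoring function (the function name, arguments, and output format are illustrative, not DiscoveryWorld's actual API): a binary completion flag, a procedural fraction over task-relevant actions, and an accuracy over the explanatory knowledge the agent reports.

```python
def score_run(completed, actions_done, actions_required,
              knowledge_correct, knowledge_total):
    """Combine the three automatic metrics for one task run."""
    return {
        "completion":  1.0 if completed else 0.0,            # (1) binary
        "report_card": actions_done / actions_required,      # (2) procedural
        "knowledge":   knowledge_correct / knowledge_total,  # (3) explanatory
    }

print(score_run(False, 7, 10, 3, 4))
# → {'completion': 0.0, 'report_card': 0.7, 'knowledge': 0.75}
```

Keeping the three signals separate matters: an agent can fail the end-to-end task (completion 0) while still receiving partial credit for sound procedure or correct intermediate explanations.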
Virtual Collaboration
The holy grail for scientists is to focus on their research and produce scientific discoveries while offloading time-consuming tasks. So-called artificial intelligence (AI) co-scientists are helping to make this possible. These collaborative AI systems are designed to assist human researchers by accelerating scientific discovery, enhancing collaboration, analyzing data, and going beyond human intuition. An AI co-scientist performs various scientific tasks, especially hypothesis generation, experimental design, verification, and literature review, and it uses the results to learn to improve its ability to generate and refine hypotheses.
AI-Newton: A Concept-Driven Physical Law Discovery System without Prior Physical Knowledge
Fang, You-Le, Jian, Dong-Shan, Li, Xiang, Ma, Yan-Qing
Advances in artificial intelligence (AI) have made AI-driven scientific discovery a highly promising new paradigm [1]. Although AI has achieved remarkable results in tackling domain-specific challenges [2, 3], the ultimate aspiration from a paradigm-shifting perspective still lies in developing reliable AI systems capable of autonomous scientific discovery directly from a large collection of raw data without supervision [4, 5]. Current approaches to automated physics discovery focus on individual experiments, employing either neural network (NN)-based methods [6-25] or symbolic techniques [26-33]. By analyzing data from a single experiment, these methods can construct a specific model capable of predicting future data from the same experiment; if sufficiently simple, such a model may even be expressed in symbolic form [34-36]. Although these methods represent a crucial and successful stage towards automated scientific discovery, they have not yet reached a discovery capacity comparable to that of human physicists.
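The single-experiment symbolic approach mentioned above can be illustrated with a toy example (the candidate library, data, and fitting procedure are illustrative, not AI-Newton's actual method): search a small set of symbolic forms for the one that best explains measurements from one experiment, here free fall with s = ½gt².

```python
import numpy as np

t = np.linspace(0.1, 2.0, 50)
s = 0.5 * 9.81 * t**2                    # noiseless "measurements"

# A small library of candidate symbolic forms, each with one free
# coefficient a fitted by least squares.
candidates = {
    "s = a*t":    lambda a: a * t,
    "s = a*t**2": lambda a: a * t**2,
    "s = a*t**3": lambda a: a * t**3,
}

best_name, best_err, best_a = None, np.inf, None
for name, f in candidates.items():
    basis = f(1.0)                       # the form evaluated with a = 1
    a = (basis @ s) / (basis @ basis)    # closed-form least-squares fit
    err = np.mean((f(a) - s) ** 2)
    if err < best_err:
        best_name, best_err, best_a = name, err, a

print(best_name, best_a)                 # quadratic form wins, a ≈ g/2
```

This captures the single-experiment regime the authors describe: a model specific to one dataset, recoverable in symbolic form when it is simple enough, but still far from the cross-experiment, concept-building capacity of a human physicist.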