
abductive reasoning


Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery

#artificialintelligence

This paper revisits datasets and evaluation criteria for symbolic regression (SR), the task of expressing given data using mathematical equations, with a specific focus on its potential for scientific discovery. Starting from the set of formulas used in existing datasets based on the Feynman Lectures on Physics, we recreate 120 datasets to discuss the performance of symbolic regression for scientific discovery (SRSD). For each of the 120 SRSD datasets, we carefully review the properties of the formula and its variables to design reasonably realistic sampling ranges of values, so that our new SRSD datasets can be used to evaluate the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets. As an evaluation metric, we also propose using the normalized edit distance between the predicted and the ground-truth equation trees. Whereas existing metrics are either binary or measure the error between the target values and an SR model's predicted values for a given input, the normalized edit distance evaluates the structural similarity between the ground-truth and predicted equation trees.
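To make the proposed metric concrete, below is a minimal sketch of a normalized tree edit distance between two expression trees. It uses the zss package's Zhang-Shasha algorithm and normalizes by the larger tree's node count; the paper's exact tree representation and normalization may differ, and the example equations are chosen purely for illustration.

```python
# pip install zss
from zss import Node, simple_distance

def tree_size(node):
    # count nodes recursively, used as the normalization constant
    return 1 + sum(tree_size(c) for c in Node.get_children(node))

def normalized_edit_distance(truth, pred):
    # raw Zhang-Shasha edit distance divided by the larger tree size:
    # 0 means identical trees, values near 1 mean structurally unrelated
    return simple_distance(truth, pred) / max(tree_size(truth), tree_size(pred))

# ground truth: F = m * a, as the tree (* m a)
truth = Node("*").addkid(Node("m")).addkid(Node("a"))
# prediction: F = m * a + c, as the tree (+ (* m a) c)
pred = Node("+").addkid(
    Node("*").addkid(Node("m")).addkid(Node("a"))
).addkid(Node("c"))

print(normalized_edit_distance(truth, pred))  # 2 edits / 5 nodes = 0.4
```

Unlike a binary solved/unsolved criterion, a score like 0.4 here credits the prediction for recovering the m * a subterm while penalizing the spurious + c term.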


How Risk Aversion Is Killing the Spirit of Scientific Discovery

Mother Jones

The Allen Telescope Array, used by Northern California's SETI Institute in its often difficult-to-fund search for extraterrestrial life. (Redding Record Searchlight / Zuma Press) This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Science is built on the boldly curious exploration of the natural world. Astounding leaps of imagination and insight, coupled with a laser-like focus on empiricism and experimentation, have brought forth countless insights into the workings of the universe we find ourselves in. But the culture that celebrates, supports, and rewards the audacious mental daring that is the hallmark of science is at risk of collapsing under a mountain of cautious, risk-averse, incurious advancement that seeks merely to win grants and peer approval. I've encountered this problem myself.


This new dataset shows that AI still lacks commonsense reasoning

#artificialintelligence

Abductive reasoning, frequently mistaken for deductive reasoning, is the process of making a plausible inference from incomplete information. For example, given a photo showing a toppled truck and a police cruiser on a snowy freeway, abductive reasoning may lead someone to infer that dangerous road conditions caused an accident. Humans can quickly consider this sort of context to arrive at a hypothesis. But AI struggles, despite recent technical advances. Motivated to explore the challenge, researchers at the Allen Institute for Artificial Intelligence, the University of California, Berkeley, and the MIT-IBM Watson AI Lab created a dataset called Sherlock, a collection of over 100,000 images of scenes paired with clues a viewer could use to answer questions about those scenes.


Science and innovation rely on successful collaboration

#artificialintelligence

It may sound obvious, perhaps even clichéd, but this mantra must be remembered in the ongoing political negotiations over Horizon Europe, which could see Switzerland and the UK excluded from EU research projects. We need more, not fewer, researchers collaborating to solve today's and tomorrow's challenges. Horizon Europe projects will benefit, as they have in the past, from working closely with Swiss and British researchers, who have long played key roles. This is the motivation behind the Stick to Science campaign, led by ETH Zurich, which collaborates with IBM Research on nanotechnology. The campaign calls on all three parties, Switzerland, the UK, and the EU, to resolve the current stalemate and put Swiss and British association agreements in place.


Prade

AAAI Conferences

Given a 4-tuple of Boolean variables (a, b, c, d), logical proportions are modeled by a pair of equivalences relating similarity indicators (a ∧ b and ¬a ∧ ¬b) or dissimilarity indicators (a ∧ ¬b and ¬a ∧ b) pertaining to the pair (a, b) to the ones associated with the pair (c, d). Logical proportions are homogeneous when they are based on equivalences between indicators of the same kind. There are only 4 such homogeneous proportions, which respectively express that i) "a differs from b as c differs from d" (and "b differs from a as d differs from c"), ii) "a differs from b as d differs from c" (and "b differs from a as c differs from d"), iii) "what a and b have in common, c and d have it also", iv) "what a and b have in common, neither c nor d have it". We prove that each of these proportions is the unique Boolean formula (up to equivalence) that satisfies groups of remarkable properties, including a stability property w.r.t. a specific permutation of the terms of the proportion. The first one (i) is shown to be the only one to satisfy the standard postulates of an analogical proportion. The paper also studies how two analogical proportions can be combined into a new one. We then examine how homogeneous proportions can be used for diverse prediction tasks. We particularly focus on the completion of analogical-like series, and on missing value abduction problems. Finally, the paper compares our approach with other existing works on qualitative prediction based on ideas of betweenness, or of matrix abduction.
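As a quick illustration of proportion i), the analogical proportion can be written as the Boolean formula ((a ∧ ¬b) ≡ (c ∧ ¬d)) ∧ ((¬a ∧ b) ≡ (¬c ∧ d)). The short Python enumeration below (an illustrative sketch, not code from the paper) checks it over all 16 Boolean 4-tuples:

```python
from itertools import product

def analogy(a, b, c, d):
    # "a differs from b as c differs from d" (and conversely):
    # (a AND NOT b <=> c AND NOT d) AND (NOT a AND b <=> NOT c AND d)
    return ((a and not b) == (c and not d)) and \
           ((not a and b) == (not c and d))

# enumerate the Boolean 4-tuples satisfying the analogical proportion
valid = [t for t in product([False, True], repeat=4) if analogy(*t)]
for a, b, c, d in valid:
    print(int(a), int(b), int(c), int(d))
# exactly 6 of the 16 patterns hold: 0000, 0011, 0101, 1010, 1100, 1111
```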


Sasaki

AAAI Conferences

Abduction is a form of inference that seeks the best explanation for the given observation. Because it provides a reasoning process based on background knowledge, it is used in applications that need convincing explanations. In this study, we consider weighted abduction, which is one of the commonly used mathematical models for abduction. The main difficulty associated with applying weighted abduction to real problems is its computational complexity. A state-of-the-art method formulates weighted abduction as an integer linear programming (ILP) problem and solves it using efficient ILP solvers; however, it is still limited to solving problems that include at most 100 rules of background knowledge and observations.
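To make the ILP formulation concrete, here is a minimal, hypothetical sketch using the pulp library (not the paper's system): two candidate hypotheses, h1 and h2, can each explain a single observation at an assumed cost, and the solver picks the cheapest set of hypotheses that covers the observation. Real weighted-abduction systems must additionally handle unification and rule chaining over large knowledge bases, which is where the computational difficulty arises.

```python
# pip install pulp
import pulp

# hypothetical hypothesis costs, assumed purely for illustration
costs = {"h1": 1.2, "h2": 0.9}

prob = pulp.LpProblem("toy_weighted_abduction", pulp.LpMinimize)
# binary choice variables: 1 means the hypothesis is assumed
assume = {h: pulp.LpVariable(h, cat="Binary") for h in costs}

# objective: minimize the total cost of the assumed hypotheses
prob += pulp.lpSum(costs[h] * assume[h] for h in costs)

# the observation must be explained by at least one hypothesis
prob += assume["h1"] + assume["h2"] >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({h: int(assume[h].value()) for h in costs})  # {'h1': 0, 'h2': 1}
```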


Abductive inference: The blind spot of artificial intelligence

#artificialintelligence

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain. But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discoveries.


Interactive Model with Structural Loss for Language-based Abductive Reasoning

arXiv.org Artificial Intelligence

The abductive natural language inference task ($\alpha$NLI) is to infer the most plausible explanation connecting two observations. In the $\alpha$NLI task, two observations are given, and the most plausible hypothesis must be picked out from a set of candidates. Existing methods model the relation between the observations and each candidate hypothesis separately and penalize the inference network uniformly. In this paper, we argue that it is unnecessary to distinguish the reasoning abilities among correct hypotheses and, similarly, that all wrong hypotheses contribute equally when explaining the reasons for the observations. Therefore, we propose to group rather than rank the hypotheses, and we design a structural loss called the ``joint softmax focal loss''. Based on the observation that the hypotheses are generally semantically related, we design a novel interactive language model aimed at exploiting the rich interaction among competing hypotheses. We name this new model for $\alpha$NLI the Interactive Model with Structural Loss (IMSL). The experimental results show that IMSL achieves the highest performance on the RoBERTa-large pretrained model, with ACC and AUC increased by about 1\% and 5\%, respectively.
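For context, the sketch below implements a generic softmax focal loss in PyTorch. It illustrates only the focusing idea behind the paper's loss, not the joint formulation itself, which additionally groups correct and incorrect hypotheses; the gamma value and tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def softmax_focal_loss(logits, target, gamma=2.0):
    # logits: (batch, num_candidates) scores; target: (batch,) index of
    # the correct hypothesis. The (1 - p_t)**gamma factor down-weights
    # candidates the model already classifies confidently, focusing
    # training on the hard ones.
    logp = F.log_softmax(logits, dim=-1)
    logp_t = logp.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    p_t = logp_t.exp()
    return ((1.0 - p_t) ** gamma * -logp_t).mean()

# toy usage with random scores for four candidate hypotheses
scores = torch.randn(2, 4)
labels = torch.tensor([1, 3])
print(softmax_focal_loss(scores, labels))
```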


Revisiting C.S. Peirce's Experiment: 150 Years Later

arXiv.org Artificial Intelligence

An iconoclastic philosopher and polymath, Charles Sanders Peirce (1839-1914) is among the greatest of American minds. In 1872, Peirce conducted a series of experiments to determine the distribution of response times to an auditory stimulus, which is widely regarded as one of the most significant statistical investigations in the history of nineteenth-century American mathematical research (Stigler, 1978). On the 150th anniversary of this historic experiment, we look back at Peirce's view of empirical modeling through a modern statistical lens.