Abductive Reasoning


These are the top 20 scientific discoveries of the decade

#artificialintelligence

To understand the natural world, scientists must measure it--but how do we define our units? Over the decades, scientists have gradually redefined classic units in terms of universal constants, such as using the speed of light to help define the length of a meter. But the scientific unit of mass, the kilogram, remained pegged to "Le Grand K," a metallic cylinder stored at a facility in France. If that ingot's mass varied for whatever reason, scientists would have to recalibrate their instruments. No more: In 2019, scientists agreed to adopt a new kilogram definition based on a fundamental factor in physics called Planck's constant and the improved definitions for the units of electrical current, temperature, and the number of particles in a given substance.
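In SI terms, the logic of the new definition can be sketched in one line (a back-of-the-envelope summary, not taken from the article): with the second and the meter already fixed by the cesium transition frequency and the speed of light, assigning Planck's constant an exact numerical value pins down the kilogram:

h = 6.62607015 \times 10^{-34}\,\mathrm{J\,s} = 6.62607015 \times 10^{-34}\,\mathrm{kg\,m^{2}\,s^{-1}}
\quad\Longrightarrow\quad
1\,\mathrm{kg} = \frac{h}{6.62607015 \times 10^{-34}}\,\mathrm{m^{-2}\,s}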


AutoDiscovery-Exploring Complex Relationships for Scientific Discovery

#artificialintelligence

More detailed analysis would follow from initial discoveries of interesting and significant parameter correlations within complex high-dimensional data. An article was recently published in Nature on "Statistical Errors – p Values, the Gold Standard of Statistical Validity, Are Not as Reliable as Many Scientists Assume" (by Regina Nuzzo, Nature, 506, 150-152, 2014). In this article, Columbia University statistician Andrew Gelman states that instead of doing multiple separate small studies, "researchers would first do small exploratory studies and gather potentially interesting findings without worrying too much about false alarms. Then, on the basis of these results, the authors would decide exactly how they planned to confirm the findings." In other words, a disciplined scientific methodology that includes both exploratory and confirmatory analyses can be documented within an open science framework (e.g., https://osf.io).
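As a rough illustration of that two-stage workflow (a hedged sketch; the synthetic data, variable count, and thresholds below are invented, not from the article), an exploratory pass screens many candidate correlations on one portion of the data, and only the pre-declared survivors are tested on a held-out confirmatory portion:

# Sketch of an exploratory-then-confirmatory analysis; all data and names are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))             # 400 samples, 50 candidate variables (synthetic)
y = 0.4 * X[:, 3] + rng.normal(size=400)   # only variable 3 is truly associated with y

explore, confirm = slice(0, 200), slice(200, 400)

# Stage 1: exploratory screen -- collect "potentially interesting" variables,
# tolerating false alarms.
candidates = [j for j in range(X.shape[1])
              if stats.pearsonr(X[explore, j], y[explore])[1] < 0.05]

# Stage 2: confirmatory test of only the pre-declared candidates on untouched data,
# with a Bonferroni correction for the number of confirmatory tests.
alpha = 0.05 / max(len(candidates), 1)
confirmed = [j for j in candidates
             if stats.pearsonr(X[confirm, j], y[confirm])[1] < alpha]
print("screened:", candidates, "confirmed:", confirmed)

The point is less the particular test than the discipline: the confirmatory split is never consulted during exploration, which is exactly the kind of plan a pre-registration on a platform such as osf.io can document.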


Pathology - Pixel Scientia Labs - Quantifying Images For Scientific Discovery

#artificialintelligence

Recent improvements in whole slide scanning systems, GPU computing, and deep learning make automated slide analysis well-equipped to solve new and challenging analysis tasks. These learning methods are trained on labeled data. This could be anything from annotating many examples of mitosis to labeling tissue types to categorizing a full slide or set of slides from a particular patient sample. The goal is then to learn a mapping from the input images to the desired output on the training data. The same model can then be applied to unseen data.
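A minimal sketch of that supervised setup, assuming a PyTorch-style pipeline (the tile size, tissue classes, and toy network below are illustrative, not Pixel Scientia's implementation): labeled tiles are used to fit a mapping from images to labels, and the fitted model is then applied to unseen tiles.

# Illustrative supervised training loop for tissue-type classification of slide tiles.
import torch
import torch.nn as nn

num_classes = 4                         # e.g. tumor / stroma / necrosis / background (hypothetical)
model = nn.Sequential(                  # small CNN standing in for a production architecture
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, num_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-ins for labeled 64x64 RGB tiles extracted from annotated slides.
tiles = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, num_classes, (32,))

for _ in range(5):                      # learn the image -> label mapping on training data
    optimizer.zero_grad()
    loss = loss_fn(model(tiles), labels)
    loss.backward()
    optimizer.step()

unseen = torch.randn(8, 3, 64, 64)      # the same model is then applied to unseen tiles
predictions = model(unseen).argmax(dim=1)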


Empowering Innovation & Scientific Discoveries

#artificialintelligence

CAS drives an increase in R&D productivity with the launch of its breakthrough Retrosynthesis Planner in SciFinder-n. The planner unites advanced technology with CAS's unmatched collection of chemical reaction data to increase the efficiency of chemical synthesis planning.


Can we trust scientific discoveries made using machine learning?

#artificialintelligence

Rice University statistician Genevera Allen says scientists must keep questioning the accuracy and reproducibility of scientific discoveries made by machine-learning techniques until researchers develop new computational systems that can critique themselves. Allen, associate professor of statistics, computer science and electrical and computer engineering at Rice and of pediatrics-neurology at Baylor College of Medicine, will address the topic in both a press briefing and a general session today at the 2019 Annual Meeting of the American Association for the Advancement of Science (AAAS). "The question is, 'Can we really trust the discoveries that are currently being made using machine-learning techniques applied to large data sets?'" "The answer in many situations is probably, 'Not without checking,' but work is underway on next-generation machine-learning systems that will assess the uncertainty and reproducibility of their predictions." Machine learning (ML) is a branch of statistics and computer science concerned with building computational systems that learn from data rather than following explicit instructions. Allen said much attention in the ML field has focused on developing predictive models that allow ML to make predictions about future data based on its understanding of data it has studied. "A lot of these techniques are designed to always make a prediction," she said.
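One simple kind of "checking" in the spirit Allen describes (a hedged sketch, not her group's method) is to ask whether a data-driven discovery reappears under resampling; here, whether a clustering found on a data set is reproduced on bootstrap replicates:

# Sketch: assess reproducibility of a clustering via bootstrap stability (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)),
               rng.normal(3, 1, (100, 5))])          # synthetic data with two groups

reference = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

scores = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))            # bootstrap resample of the rows
    boot = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx])
    scores.append(adjusted_rand_score(reference[idx], boot))

print("mean stability (adjusted Rand):", np.mean(scores))

Scores near 1.0 suggest the discovered structure is reproducible; low or unstable scores would be a warning that it may not survive the next data set.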


ABox Abduction via Forgetting in ALC (Long Version)

arXiv.org Artificial Intelligence

Abductive reasoning generates explanatory hypotheses for new observations using prior knowledge. This paper investigates the use of forgetting, also known as uniform interpolation, to perform ABox abduction in description logic (ALC) ontologies. Non-abducibles are specified by a forgetting signature, which can contain concept symbols but not role symbols. The resulting hypotheses are semantically minimal, and each consists of a set of disjuncts. The disjuncts are independent explanations that are not redundant with respect to the background ontology or the other disjuncts, so together they represent a form of hypothesis space. The observations and hypotheses handled by the method can contain both atomic and complex ALC concepts, excluding role assertions, and are not restricted to Horn clauses. Two approaches to redundancy elimination are explored for practical use: full and approximate. Using a prototype implementation, experiments were performed over a corpus of real-world ontologies to investigate the practicality of both approaches across several settings.
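A toy instance of the problem the abstract describes, constructed here only for illustration (the ontology, individuals, and concept names are invented): given a background ontology \mathcal{O}, an observation \psi not entailed by \mathcal{O}, and a forgetting signature \mathcal{F} of non-abducible concept symbols, abduction seeks a hypothesis \mathcal{H} over the remaining symbols such that \mathcal{O} \cup \{\mathcal{H}\} \models \psi.

\mathcal{O} = \{\,\mathrm{Flu} \sqsubseteq \mathrm{Fever},\ \mathrm{Covid} \sqsubseteq \mathrm{Fever}\,\}, \qquad
\psi = \mathrm{Fever}(\mathit{ann}), \qquad \mathcal{F} = \{\mathrm{Fever}\}

\mathcal{H} = (\mathrm{Flu} \sqcup \mathrm{Covid})(\mathit{ann}), \qquad \mathcal{O} \cup \{\mathcal{H}\} \models \psi

Each disjunct, \mathrm{Flu}(\mathit{ann}) and \mathrm{Covid}(\mathit{ann}), independently explains the observation, and the disjunction as a whole commits to less than either disjunct alone.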


Ford gives scientific explanation for her memory of alleged Kavanaugh incident

FOX News

Dr. Christine Blasey Ford responds to a question from Sen. Dianne Feinstein during testimony before the Senate Judiciary Committee on her sexual assault allegations against Supreme Court nominee Brett Kavanaugh. Christine Blasey Ford gave a detailed scientific explanation for her memory of the alleged incident involving Supreme Court nominee Judge Brett Kavanaugh at her highly anticipated Senate testimony Thursday. Senate Judiciary Committee Ranking Member Dianne Feinstein, D-Calif., pressed Ford over her level of certainty that it was, in fact, Kavanaugh who allegedly pinned her down 36 years ago, while in high school, and attempted to remove her clothing. "How are you so sure that it was he?" Feinstein asked. Ford, a California-based psychology professor, laid out a detailed scientific explanation.


AI for code encourages collaborative, open scientific discovery

#artificialintelligence

We have seen significant recent progress in pattern analysis and machine intelligence applied to images, audio and video signals, and natural language text, but not as much applied to another artifact produced by people: computer program source code. In a paper to be presented at the FEED Workshop at KDD 2018, we showcase a system that makes progress towards the semantic analysis of code. By doing so, we provide the foundation for machines to truly reason about program code and learn from it. The work, also recently demonstrated at IJCAI 2018, is conceived and led by IBM Science for Social Good fellow Evan Patterson and focuses specifically on data science software. Data science programs are a special kind of computer code, often fairly short, but full of semantically rich content that specifies a sequence of data transformation, analysis, modeling, and interpretation operations.
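The kind of short, semantically rich script the paragraph describes might look like the hypothetical pipeline embedded in the sketch below (the CSV file, columns, and model are invented); parsing it into an abstract syntax tree, as done at the end, is one elementary starting point for a machine to reason about which transformation, analysis, and modeling operations the code performs:

# A small, typical data science script (all names and the CSV file are hypothetical),
# followed by a first step toward analyzing it programmatically.
import ast

source = '''
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("patients.csv")                               # data loading
df = df.dropna(subset=["age", "outcome"])                      # transformation
model = LogisticRegression().fit(df[["age"]], df["outcome"])   # modeling
print(model.score(df[["age"]], df["outcome"]))                 # interpretation
'''

# Recover which functions and methods the script calls by walking its syntax tree.
tree = ast.parse(source)
calls = [node.func.attr if isinstance(node.func, ast.Attribute) else node.func.id
         for node in ast.walk(tree) if isinstance(node, ast.Call)]
print(calls)   # names such as 'read_csv', 'dropna', 'fit', 'score' (order follows the AST walk)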


Abe says he's willing to talk directly with Pyongyang to resolve abduction issue

The Japan Times

WASHINGTON – Japanese Prime Minister Shinzo Abe said Thursday he is willing to talk directly with North Korea in a bid to resolve the festering issue of abductions of Japanese citizens and foster better ties with Pyongyang. "I wish to directly face North Korea and talk with them so that the abduction problem can be resolved quickly," Abe said at a joint press conference with President Donald Trump. The U.S. leader promised to raise the highly sensitive issue of the Japanese nationals kidnapped by Pyongyang in the 1970s and 1980s with Kim Jong Un at next week's high-stakes summit in Singapore. Abe added there was no change in Japan's policy to pursue "real peace in Northeast Asia" and that if North Korea "is willing to take steps" in the right direction, it will have a "bright future."