Abductive Reasoning


Impact Biomedical Initiates Quantum, a New Frontier in Pharmaceutical Development

#artificialintelligence

Impact Biomedical, a wholly-owned subsidiary of SGX-listed Singapore eDevelopment, has announced the initiation of Quantum, a research program designed as a solution to the 'patent cliff', an impending threat to the pharmaceutical industry. A patent cliff looms when patents for blockbuster drugs expire without new drugs to replace them; pharmaceutical companies then experience an abrupt decrease in revenue, reducing pharmaceutical innovation globally, including crucial research into new methods to prevent and treat illnesses. Impact, through its strategic partner Global Research and Discovery Group Sciences (GRDG), has created Quantum, a new frontier in pharmaceutical development. Quantum is a new class of medicinal chemistry that uses advanced methods to boost the efficacy and persistence of natural compounds and existing drugs while maintaining the safety profile of the original molecules. Instead of modifying functional groups, as is typical in drug discovery today, the new technique alters the behavior of molecules at the sub-molecular level.


Why are we afraid of sharks? There's a scientific explanation.

National Geographic

Sharks, especially great whites, were catapulted into the public eye with the release of the film Jaws in the summer of 1975. The film is the story of a massive great white that terrorizes a seaside community, and the image of the cover alone--the exposed jaws of a massive shark rising upward in murky water--is enough to inject fear into the hearts of would-be swimmers. Other thrillers have perpetuated the theme of sharks as villains. But where did our fear of sharks come from, and how far back does it go? We're going to need a bigger boat: Take a look at the design history of Jaws and its iconic cover https://t.co/dRdRPILF7L


Department of Energy plans major AI push to speed scientific discoveries

#artificialintelligence

A U.S. Department of Energy initiative could refurbish existing supercomputers, turning them into high-performance artificial intelligence machines. WASHINGTON, D.C.--The U.S. Department of Energy (DOE) is planning a major initiative to use artificial intelligence (AI) to speed up scientific discoveries. At a meeting here last week, DOE officials said they will likely ask Congress for between $3 billion and $4 billion over 10 years, roughly the amount the agency is spending to build next-generation "exascale" supercomputers. "That's a good starting point," says Earl Joseph, CEO of Hyperion Research, a high-performance computing analysis firm in St. Paul that tracks AI research funding. He notes, though, that DOE's planned spending is modest compared with the feverish investment in AI by China and industry.


Explosive Proofs of Mathematical Truths

arXiv.org Artificial Intelligence

Mathematical proofs are both paradigms of certainty and some of the most explicitly justified arguments that we have in the cultural record. Their very explicitness, however, leads to a paradox, because their probability of error grows exponentially as the argument expands. Here we show that under a cognitively plausible belief formation mechanism that combines deductive and abductive reasoning, mathematical arguments can undergo what we call an epistemic phase transition: a dramatic and rapidly propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates. To show this, we analyze an unusual dataset of forty-eight machine-aided proofs from the formalized reasoning system Coq, including major theorems ranging from ancient to 21st-century mathematics, along with four hand-constructed cases from Euclid, Apollonius, Spinoza, and Andrew Wiles. Our results bear both on recent work in the history and philosophy of mathematics, and on a question, basic to cognitive science, of how we form beliefs and justify them to others.
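The exponential growth of error that the abstract describes can be illustrated with a minimal sketch. This is my own toy model, not the paper's mechanism: assume each claim-to-claim inference is independently correct with probability 1 - eps, so the chance that an entire n-step argument is error-free decays as (1 - eps)^n.

```python
# Toy model: an n-step deductive argument is error-free only if every
# claim-to-claim inference succeeds, each independently with prob. 1 - eps.
def p_error_free(n_steps, eps):
    """Probability that all n_steps inferences are correct."""
    return (1.0 - eps) ** n_steps

# Even a 1% per-step error rate compounds quickly as arguments grow.
for n in (10, 100, 1000):
    print(n, p_error_free(n, 0.01))
```

At a 1% per-step error rate, a thousand-step argument is almost certainly flawed somewhere; that compounding is the paradox the paper's phase-transition mechanism addresses.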


AlphaFold: Using AI for scientific discovery

#artificialintelligence

The recipes for those proteins--called genes--are encoded in our DNA. An error in the genetic recipe may result in a malformed protein, which could result in disease or death for an organism. Many diseases, therefore, are fundamentally linked to proteins. But just because you know the genetic recipe for a protein doesn't mean you automatically know its shape. Proteins are composed of chains of amino acids (also referred to as amino acid residues).
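As a loose illustration of how a "genetic recipe" specifies a chain of amino acids, here is a toy translation sketch. The sequence is invented, and the codon table is truncated to five entries from the standard genetic code.

```python
# Truncated codon table: just five entries from the standard genetic code.
CODON_TABLE = {"ATG": "Met", "TTT": "Phe", "GGC": "Gly", "AAA": "Lys",
               "TAA": "STOP"}

def translate(dna):
    """Read the DNA string three bases at a time into an amino-acid chain."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # an invented "gene"
# A single-base error (AAA -> TAA) introduces a premature stop codon,
# yielding a shorter, potentially malformed protein:
print(translate("ATGTTTGGCTAATAA"))
```

The second call shows how one misread base can truncate the protein, the kind of "error in the genetic recipe" the article mentions.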


These are the top 20 scientific discoveries of the decade

#artificialintelligence

To understand the natural world, scientists must measure it--but how do we define our units? Over the decades, scientists have gradually redefined classic units in terms of universal constants, such as using the speed of light to help define the length of a meter. But the scientific unit of mass, the kilogram, remained pegged to "Le Grand K," a metallic cylinder stored at a facility in France. If that ingot's mass varied for whatever reason, scientists would have to recalibrate their instruments. No more: In 2019, scientists agreed to adopt a new kilogram definition based on a fundamental factor in physics called Planck's constant and the improved definitions for the units of electrical current, temperature, and the number of particles in a given substance.
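The new definition can be stated concretely: with the Planck constant h, the speed of light c, and the caesium frequency all fixed as exact values in the 2019 SI, the kilogram is whatever mass makes those constants come out right. A quick sketch reproduces the SI brochure's relation 1 kg ≈ 1.4755214 × 10^40 · h·Δν_Cs/c².

```python
# Exact defined constants from the 2019 SI redefinition.
h = 6.62607015e-34      # Planck constant, J s
c = 299792458.0         # speed of light, m/s
dv_cs = 9192631770.0    # caesium-133 hyperfine transition frequency, Hz

# Express one kilogram in the natural mass unit h * dv_cs / c**2.
kg_factor = c ** 2 / (h * dv_cs)
print(f"1 kg = {kg_factor:.7e} x (h * dv_cs / c^2)")
```

Because all three constants are exact by definition, this factor is fixed forever; no physical ingot can drift and force a recalibration.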


AutoDiscovery-Exploring Complex Relationships for Scientific Discovery

#artificialintelligence

More detailed analysis would follow from initial discoveries of interesting and significant parameter correlations within complex high-dimensional data. An article was recently published in Nature, "Statistical Errors – p Values, the Gold Standard of Statistical Validity, Are Not as Reliable as Many Scientists Assume" (Regina Nuzzo, Nature, 506, 150-152, 2014). In this article, Columbia University statistician Andrew Gelman states that instead of doing multiple separate small studies, "researchers would first do small exploratory studies and gather potentially interesting findings without worrying too much about false alarms. Then, on the basis of these results, the authors would decide exactly how they planned to confirm the findings." In other words, a disciplined scientific methodology that includes both exploratory and confirmatory analyses can be documented within an open science framework (e.g., https://osf.io).
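Gelman's exploratory-then-confirmatory workflow can be sketched on synthetic data (the dataset and the screening rule here are invented for illustration): screen a high-dimensional table for the strongest correlation without worrying about false alarms, then re-test only that pre-registered candidate on held-out samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "complex high-dimensional data": 200 samples, 50 parameters;
# only parameter 0 truly drives the outcome y.
n, p = 200, 50
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(scale=0.5, size=n)

# Exploratory half: screen every parameter for the strongest correlation,
# accepting that some hits may be false alarms.
half = n // 2
screen = [abs(np.corrcoef(X[:half, j], y[:half])[0, 1]) for j in range(p)]
candidate = int(np.argmax(screen))

# Confirmatory half: re-test only the pre-registered candidate on held-out data.
confirmed = np.corrcoef(X[half:, candidate], y[half:])[0, 1]
print(candidate, round(float(confirmed), 3))
```

Splitting the data this way means the confirmatory correlation is not inflated by the same multiple-comparisons search that produced the candidate.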


Pathology - Pixel Scientia Labs - Quantifying Images For Scientific Discovery

#artificialintelligence

Recent improvements in whole slide scanning systems, GPU computing, and deep learning make automated slide analysis well-equipped to solve new and challenging analysis tasks. These learning methods are trained on labeled data, which could be anything from annotated examples of mitosis to tissue-type labels to a category assigned to a full slide or set of slides from a particular patient sample. The goal is to learn a mapping from the input images to the desired output on the training data. The same model can then be applied to unseen data.
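A minimal stand-in for the train-then-apply workflow described above, with synthetic "patch features" in place of real slide data and a nearest-centroid rule in place of a deep network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for labeled training data: 8-dimensional feature
# vectors from annotated patches of two tissue classes (0 and 1).
X_train = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),    # class 0
                     rng.normal(3.0, 1.0, size=(50, 8))])   # class 1
y_train = np.array([0] * 50 + [1] * 50)

# "Training": learn a mapping from inputs to labels -- here, simply the
# mean feature vector (centroid) of each class.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign an unseen feature vector to the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Apply the same model to unseen data.
print(predict(rng.normal(3.0, 1.0, size=8)))
```

The structure mirrors the article's description: fit on labeled examples, then reuse the frozen model on data it has never seen.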


Empowering Innovation & Scientific Discoveries

#artificialintelligence

CAS drives an increase in R&D productivity with the launch of a breakthrough Retrosynthesis Planner in SciFinder-n. The planner unites advanced technology with CAS's unmatched collection of chemical reaction data to increase the efficiency of chemical synthesis planning.


Reasoning-Driven Question-Answering for Natural Language Understanding

arXiv.org Artificial Intelligence

Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received significant attention throughout the history of NLP research. This primary goal has been studied under different tasks, such as Question Answering (QA) and Textual Entailment (TE). In this thesis, we investigate the NLU problem through the QA task and focus on the aspects that make it a challenge for the current state-of-the-art technology. This thesis is organized into three main parts: In the first part, we explore multiple formalisms to improve existing machine comprehension systems. We propose a formulation for abductive reasoning in natural language and show its effectiveness, especially in domains with limited training data. Additionally, to help reasoning systems cope with irrelevant or redundant information, we create a supervised approach to learn and detect the essential terms in questions. In the second part, we propose two new challenge datasets. In particular, we create two datasets of natural language questions where (i) the first one requires reasoning over multiple sentences; (ii) the second one requires temporal common sense reasoning. We hope that the two proposed datasets will motivate the field to address more complex problems. In the final part, we present the first formal framework for multi-step reasoning algorithms in the presence of a few important properties of language use, such as incompleteness and ambiguity. We apply this framework to prove fundamental limitations for reasoning algorithms. These theoretical results provide extra intuition into the existing empirical evidence in the field.
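Abductive reasoning of the kind the thesis formalizes can be caricatured in a few lines (the rules and facts below are invented, and this is not the thesis's actual formulation): choose the hypothesis whose known consequences best cover the observations.

```python
# Invented rule base mapping each hypothesis to its known consequences.
RULES = {
    "it rained":     {"grass is wet", "streets are wet"},
    "sprinkler ran": {"grass is wet"},
}

def best_explanation(observations, rules):
    """Abduction as coverage: prefer the hypothesis explaining the most facts."""
    return max(rules, key=lambda hyp: len(rules[hyp] & observations))

print(best_explanation({"grass is wet", "streets are wet"}, RULES))
```

Unlike deduction, the chosen hypothesis is not guaranteed by the observations; it is merely the best available explanation, which is why abduction suits domains with limited or incomplete evidence.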