continuum hypothesis


Researchers introduce ML model where learnability cannot be proved (Packt Hub)

#artificialintelligence

In a study published in Nature Machine Intelligence, researchers discovered that in some cases of machine learning it cannot be proved whether the system actually 'learned' something or solved the problem. The study explores the learnability of machine-learning problems. We already know that machine learning systems, and AI systems in general, are black boxes: you feed the system some data and you get some output, or a trained system that performs some task, but you don't know how the system arrived at a particular solution. Now we have a published study from Ben-David et al. that shows learnability in machine learning can be undecidable.


Mathematicians Have Developed a Computing Problem That AI Can Never Solve

#artificialintelligence

In a world where it seems like artificial intelligence and machine learning can figure out just about anything, that might seem like heresy – but it's true. At least, that's the case according to a new international study by a team of mathematicians and AI researchers, who discovered that despite the seemingly boundless potential of machine learning, even the cleverest algorithms are nonetheless bound by the constraints of mathematics. "The advantages of mathematics, however, sometimes come with a cost… in a nutshell… not everything is provable," the researchers, led by first author and computer scientist Shai Ben-David from the University of Waterloo, write in their paper. "Here we show that machine learning shares this fate." Awareness of these mathematical limitations is often tied to the famous Austrian mathematician Kurt Gödel, who developed in the 1930s what are known as the incompleteness theorems – two propositions suggesting that not all mathematical questions can actually be solved.


Mathematicians discovered a computer problem that no one can ever solve

#artificialintelligence

Mathematicians have discovered a problem they cannot solve. It's not that they're not smart enough; there simply is no answer. The problem has to do with machine learning -- the type of artificial-intelligence models some computers use to "learn" how to do a specific task. When Facebook or Google recognizes a photo of you and suggests that you tag yourself, it's using machine learning. Neuroscientists use machine learning to "read" someone's thoughts.


Machine learning leads mathematicians to unsolvable problem

#artificialintelligence

Austrian mathematician Kurt Gödel is known for his 'incompleteness' theorems. Credit: Alfred Eisenstaedt/LIFE Picture Coll./Getty

A team of researchers has stumbled on a question that is mathematically unanswerable because it is linked to logical limits discovered by Austrian mathematician Kurt Gödel in the 1930s that can't be resolved using standard mathematics. The mathematicians, who were working on a machine-learning problem, show that the question of 'learnability' -- whether an algorithm can extract a pattern from limited data -- is linked to a statement known as the continuum hypothesis. Gödel and, later, Paul Cohen showed that this statement can be proved neither true nor false using the standard axioms of mathematics. The latest result appeared on 7 January in Nature Machine Intelligence.
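For context, 'learnability' in the sense of extracting a pattern from limited data is usually formalized along the following lines; this is a sketch of the standard PAC-style criterion, given here only for illustration (the paper's own learning setting differs in its details):

A hypothesis class $H$ over a domain $X$ is learnable if there exist a sample-size function $m_H : (0,1)^2 \to \mathbb{N}$ and an algorithm $A$ such that, for every $\epsilon, \delta \in (0,1)$ and every distribution $D$ over $X \times \{0,1\}$, when $A$ receives $m \ge m_H(\epsilon,\delta)$ i.i.d. samples from $D$ it outputs a hypothesis $h$ satisfying, with probability at least $1-\delta$,
\[
L_D(h) \;\le\; \min_{h' \in H} L_D(h') + \epsilon,
\qquad \text{where } L_D(h) = \Pr_{(x,y)\sim D}\bigl[h(x) \ne y\bigr].
\]

The result described above shows that, for a particular learning problem, the existence of such a successful learner cannot be settled from the standard axioms of mathematics.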


Unprovability comes to machine learning

#artificialintelligence

During the twentieth century, discoveries in mathematical logic revolutionized our understanding of the very foundations of mathematics. In 1931, the logician Kurt Gödel showed that, in any system of axioms that is expressive enough to model arithmetic, some true statements will be unprovable. And in the following decades, it was demonstrated that the continuum hypothesis -- which states that no set of distinct objects has a size larger than that of the integers but smaller than that of the real numbers -- can be neither proved nor refuted using the standard axioms of mathematics. Ben-David and colleagues identify a machine-learning problem whose fate depends on the continuum hypothesis, leaving its resolution forever beyond reach. Machine learning is concerned with the design and analysis of algorithms that can learn and improve their performance as they are exposed to data.
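For reference, the continuum hypothesis described in words above can be written compactly in standard set-theoretic notation (this notation is not taken from the articles; under the axiom of choice the two forms are equivalent):
\[
\nexists\, S \;:\; \aleph_0 < |S| < 2^{\aleph_0}
\qquad\Longleftrightarrow\qquad
2^{\aleph_0} = \aleph_1,
\]
where $\aleph_0$ is the cardinality of the integers and $2^{\aleph_0}$ is the cardinality of the real numbers.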