
Collaborating Authors

 fermat


A 'Grand Unified Theory' of Math Just Got a Little Bit Closer

WIRED

The original version of this story appeared in Quanta Magazine. In 1994, an earthquake of a proof shook up the mathematical world: the mathematician Andrew Wiles had finally settled Fermat's Last Theorem, a central problem in number theory that had remained open for over three centuries. The proof didn't just enthral mathematicians; it made the front page of The New York Times. But to accomplish it, Wiles (with help from the mathematician Richard Taylor) first had to prove a more subtle intermediate statement, one with implications that extended beyond Fermat's puzzle.


Integer Factorisation, Fermat & Machine Learning on a Classical Computer

Blake, Sam

arXiv.org Artificial Intelligence

In this paper we describe a deep learning-based probabilistic algorithm for integer factorisation. We use Lawrence's extension of Fermat's factorisation algorithm to reduce the integer factorisation problem to a binary classification problem. To address the classification problem, we exploit the ease of generating large pseudo-random primes to synthesise a training corpus as large as needed. We introduce the algorithm, summarise some experiments, analyse where these experiments fall short, and finally issue a call to others to reproduce and verify the work, and to see whether this approach can be improved to the point where it becomes a practical, scalable factorisation algorithm.
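The paper builds on Lawrence's extension of Fermat's method; as context, here is a minimal sketch of the classic Fermat factorisation it extends (the function name is my own). The idea: write an odd n as a difference of squares, n = a² - b² = (a - b)(a + b).

```python
import math

def fermat_factor(n):
    """Classic Fermat factorisation: find a, b with n = a^2 - b^2."""
    if n % 2 == 0:
        raise ValueError("n must be odd")
    a = math.isqrt(n)
    if a * a < n:
        a += 1  # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:          # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

# fermat_factor(5959) -> (59, 101), found in three steps because
# both factors lie close to sqrt(5959) ~ 77.
```

The search is fast exactly when the two factors are close together, which is why the full method needs extensions such as Lawrence's to handle general inputs.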


FERMAT: An Alternative to Accuracy for Numerical Reasoning

Sivakumar, Jasivan Alex, Moosavi, Nafise Sadat

arXiv.org Artificial Intelligence

While pre-trained language models achieve impressive performance on various NLP benchmarks, they still struggle with tasks that require numerical reasoning. Recent advances in improving numerical reasoning are mostly achieved using very large language models that contain billions of parameters and are not accessible to everyone. In addition, numerical reasoning is measured using a single score on existing datasets. As a result, we do not have a clear understanding of the strengths and shortcomings of existing models on different numerical reasoning aspects, and therefore of potential ways to improve them apart from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we introduce a multi-view evaluation set for numerical reasoning in English, called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT evaluates models on various key numerical reasoning aspects such as number understanding, mathematical operations, and training dependency. Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables a systematic and automated generation of an arbitrarily large training or evaluation set for each aspect. The datasets and code are publicly available for generating further multi-view data for other tasks and languages.


ChatGPT: Optimizing Language Models for Dialogue

#artificialintelligence

We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response. We are excited to introduce ChatGPT to get users' feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free.


Are pseudoprimes hiding out among the composite reciprocals?

#artificialintelligence

""Mathematics is the queen of the sciences and number theory is the queen of mathematics." Number theory is a fascinating branch of pure mathematics. It is concerned with the properties of, and the relationships between, positive integers. The history of number theory is populated by some of the most famous mathematicians of all time -- Euclid, Carl Friedrich Gauss, Pierre de Fermat, Leonhard Euler, Joseph-Louis Lagrange, G.H. Hardy, John Littlewood, Srinivasa Ramanujan, and of course Bernhard Riemann -- as well as many modern-day mathematical superstars, such as Andrew Wiles, Terence Tao, Barry Mazur, Yitang Zhang, and James Maynard to name but a few. And number theory includes some of the most famous mathematical theorems and problems too, from Fermat's last theorem and the twin prime conjecture to Goldbach's conjecture and, the granddaddy of them all, the Riemann hypothesis. The common practice of number theorists is one of chalk dust and blackboards, theorems and proofs, which can seem a world apart from more empirical explorations of data science.


Fermat's Last Theorem

#artificialintelligence

Fermat's Last Theorem (FLT) states that there are no positive integers x, y, and z that satisfy the Diophantine equation x^n + y^n = z^n for any integer n > 2. The French lawyer and mathematician Pierre de Fermat made this conjecture in 1637 in the margin of a copy of the book Arithmetica, an Ancient Greek mathematical text written by Diophantus of Alexandria in the 3rd century AD. Fermat famously claimed he had a proof, but that it was too large to fit in the margin of the book. The English mathematician Andrew Wiles published the first successful proof of the conjecture in 1995, after more than 350 years of effort by some of the greatest mathematicians in history (see this link for more details). We will prove the particular case where n = 4, which is the simplest one. Before that, however, we need to prove a simpler auxiliary theorem about Pythagorean triples (x, y, z).
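The auxiliary result in question is presumably Euclid's classical parametrisation of primitive Pythagorean triples, which can be stated as:

```latex
x = m^2 - n^2, \qquad y = 2mn, \qquad z = m^2 + n^2,
```
```latex
\text{with } m > n > 0, \quad \gcd(m, n) = 1, \quad m \not\equiv n \pmod{2}.
```

Every primitive triple arises this way (for example, m = 2, n = 1 gives 3, 4, 5); the n = 4 case of FLT then follows by Fermat's own method of infinite descent.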


From counting with stones to artificial intelligence: the story of calculus

#artificialintelligence

Midway through Infinite Powers, Steven Strogatz writes that Isaac Newton and Gottfried Wilhelm Leibniz, who each independently invented calculus, both "died in excruciating pain while suffering from calculi: a bladder stone for Newton, a kidney stone for Leibniz". It was a cruelly ironic end for the scientists: the word comes from the Latin for 'small stone', in reference to pebbles once used for counting. Such fascinating anecdotes abound in Infinite Powers. Strogatz, a mathematician working in nonlinear dynamics and complex systems, has written a romp through the history of calculus, the study of how things change. Starting with the ancient Greeks, the book ends with connections between the field and artificial intelligence and machine learning. Calculus was key to working with Newton's laws of motion, which stimulated the Industrial Revolution.


Fermat's Library: "Some Studies in Machine Learning Using the Game of Checkers" (annotated/explained version)

#artificialintelligence

This is Arthur Samuel's seminal paper, originally published in 1959, in which he sets out to build a program that can learn to play the game of checkers. Checkers is so complex a game (it has roughly 500 billion billion possible positions) that a brute-force-only approach is not satisfactory. Samuel's program was based on Claude Shannon's minimax strategy for finding the best move from a given position. In the paper he describes how a machine could look ahead "by evaluating the resulting board positions much as a human player might do".
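The lookahead Samuel describes can be sketched as plain minimax; in his program the static evaluation function was the part that was learned. A minimal, game-agnostic sketch (names and interface are my own, not Samuel's):

```python
def minimax(position, depth, maximizing, children, evaluate):
    """Plain minimax lookahead in the spirit of Shannon's strategy.

    children(position) returns the successor positions;
    evaluate(position) is the static board-scoring function
    (the component Samuel's program learned to tune).
    """
    kids = list(children(position))
    if depth == 0 or not kids:
        return evaluate(position)
    scores = (minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids)
    return max(scores) if maximizing else min(scores)
```

On a toy two-ply tree with leaf values {3, 5} on one branch and {2, 9} on the other, the maximizing player backs up max(min(3, 5), min(2, 9)) = 3, "evaluating the resulting board positions" exactly as the quote describes.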


Beautiful Number Theory Problem and Sandbox for Data Scientists

@machinelearnbot

The Waring conjecture (actually a problem associated with a number of conjectures, many now solved) is one of the most fascinating mathematical problems. This article covers new aspects of this problem, with a generalization and new conjectures, some with a tentative solution, and a new framework to tackle the problem. Yet it is written in simple English and accessible to the layman. I also review a number of famous related mathematical conjectures, including one with a $1 million award still waiting for a solution, as well as Goldbach's conjecture, still unproved as of today. Many curious properties of the floor function are also listed, and the emphasis is on machine learning and efficient computer-intensive algorithms for finding surprising results, which then need to be formally proved or disproved.
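Waring's problem asks for the smallest number of k-th powers needed to write every positive integer as a sum. The computer-intensive exploration the article advocates can be illustrated with a brute-force search (function name and interface are my own):

```python
from functools import lru_cache

def min_kth_powers(n, k):
    """Smallest count of positive k-th powers summing to n."""
    @lru_cache(maxsize=None)
    def best(m):
        if m == 0:
            return 0
        out = m  # worst case: m copies of 1^k
        i = 1
        while i ** k <= m:
            out = min(out, 1 + best(m - i ** k))
            i += 1
        return out
    return best(n)

# min_kth_powers(7, 2)  -> 4  (7 = 4 + 1 + 1 + 1; Lagrange's bound of 4 is tight)
# min_kth_powers(23, 3) -> 9  (23 = 8 + 8 + seven 1s; the classic worst case for cubes)
```

Searches like this surface the conjectural patterns, which, as the article says, then need to be formally proved or disproved.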


An "Infinitely Rich" Mathematician Turns 100 - Facts So Romantic

Nautilus

At the Hotel Parco dei Principi in Rome, in September of 1973, the Hungarian mathematician Paul Erdős approached his friend Richard Guy with a request. He said, "Guy, veel you have a coffee?" It cost a dollar, a small fortune to a professor of mathematics at the hinterland University of Calgary who was not much of a coffee drinker. Yet, as Guy later recalled during a memorial talk following Erdős's death at age 83 two decades ago, he was curious why the great man had sought him out. Guy and Erdős were in the Eternal City for an international colloquium on combinatorial theory, so Erdős, who sustained himself with espresso and other stimulants, worked on math problems 19 hours a day, and in his lifetime published in excess of 1,500 papers with more than 500 collaborators, most likely had another problem on the go.