

How we discovered the speed limit of arithmetic – and broke it

New Scientist

Some seemingly simple sequences of multiplication and addition grow so quickly that they call the very foundations of mathematics into question. Did you hear the one about the man who invented chess and got himself executed? Legend has it that a man called Sessa, who lived in India long ago, developed the rules for the game and presented them to a king. The king was delighted and offered the man his pick of reward. Sessa asked for a supposedly humble quantity of rice.
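In the usual telling of the legend (the excerpt above stops short of it), Sessa asks for one grain of rice on the first square of the chessboard, two on the second, four on the third, doubling on each of the 64 squares. A few lines of Python are enough to see why the "humble" request ruins the king; note that the doubling rule comes from the standard version of the story, not from the excerpt itself:

```python
# Wheat-and-chessboard legend: one grain on the first square,
# doubling on each of the 64 squares.
total = sum(2**square for square in range(64))
print(f"grains on the last square: {2**63:,}")
print(f"grains in total:           {total:,}")
assert total == 2**64 - 1  # closed form for the geometric series
```

The total comes to 18,446,744,073,709,551,615 grains, vastly more rice than has ever been harvested, which is the kind of explosive growth the article goes on to explore.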


The man who ruined mathematics

New Scientist

Gödel's seminal work directly contradicted one of the great minds of mathematics and limited the field forever.

Kurt Gödel, the man who ruined mathematics, was one of the most important thinkers of the 20th century. He was born in 1906, smack-bang in the middle of the greatest crisis that maths has ever known. Just a few decades later, he would help resolve this turmoil, but in doing so doom mathematicians to a smaller world than the one that came before. Mathematics, as an intellectual framework, is incredibly powerful. The entire point is taking one set of logical ideas and using them to build another, making maths the closest thing we have to a cognitive perpetual-motion machine - there is always a new mathematical idea lurking over the horizon, and we just need to assemble the steps to get there.


Appendix: Representing Hyperbolic Space Accurately using Multi-Component Floats

Neural Information Processing Systems

We use a Renormalize algorithm to reduce the number of components, and Algorithm 4 (Scale-Expansion, modified from [4]) takes an m-component expansion as input. At the start of training, we train models with an initial "burn-in" phase. We mention an interesting tuning result here: taking the training of the halfspace model over the WordNet Mammals subtree as an example, we vary the learning rate for different batch sizes, as shown in Table 1. We found that, if the learning rate is increased appropriately, the embedding performance of a converged model trained with a large batch size can nearly match the best performance of a converged model trained with a smaller batch size.
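For context on what "multi-component floats" means in practice: a high-precision value is stored as an unevaluated sum of ordinary floating-point numbers, and arithmetic on such expansions is built from error-free transformations. The sketch below shows the classic building blocks (Knuth's TwoSum and Shewchuk-style expansion growth); it illustrates the general technique and is not the paper's Renormalize or Scale-Expansion algorithms:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Error-free transformation (Knuth): returns (s, e) with
    s + e == a + b exactly, where s is the rounded float sum
    and e is the rounding error."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def grow_expansion(components: list[float], b: float) -> list[float]:
    """Add a scalar to a multi-component expansion (Shewchuk's
    GROW-EXPANSION): the returned components sum to
    sum(components) + b with no rounding loss."""
    result, q = [], b
    for c in components:  # components ordered by increasing magnitude
        q, e = two_sum(q, c)
        result.append(e)
    result.append(q)
    return result

# The tiny term survives instead of being rounded away, which is the
# property needed to represent points near the boundary of hyperbolic
# space, where ordinary doubles run out of precision.
print(grow_expansion([1e-30], 1.0))  # [1e-30, 1.0]
```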



STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning

Neural Information Processing Systems

For example, [5] demonstrated that LLMs explicitly trained to use "scratchpads" for intermediate steps can attain perfect in-distribution performance on arithmetic and strong out-of-distribution generalization, while models trained to predict answers directly fail to do either.
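As a concrete illustration of the scratchpad idea, the sketch below formats multi-digit addition as a sequence of intermediate column-and-carry steps ending in the answer, rather than the answer alone; the specific format is hypothetical, chosen for readability, and is not the exact scratchpad format used in [5]:

```python
def addition_scratchpad(a: int, b: int) -> str:
    """Render a + b as a scratchpad: one line per digit column,
    with explicit carries, followed by the final answer."""
    da, db = str(a)[::-1], str(b)[::-1]  # least-significant digit first
    lines, carry = [], 0
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        carry_in = carry
        carry, digit = divmod(x + y + carry_in, 10)
        lines.append(f"column {i}: {x} + {y} + carry {carry_in} "
                     f"-> write {digit}, carry out {carry}")
    if carry:
        lines.append(f"column {len(lines)}: write leftover carry {carry}")
    lines.append(f"answer: {a + b}")
    return "\n".join(lines)

print(addition_scratchpad(457, 168))
```

Training on targets like these gives the model supervised intermediate steps to imitate; training on "answer: 625" alone gives it only the final token to predict.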