
Can quantum computers now solve health care problems? We'll soon find out.

MIT Technology Review

I'm standing in front of a quantum computer built out of atoms and light at the UK's National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik's Cube-size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam. The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I'd be unlikely to get very far, though.


How Pokémon Go is giving delivery robots an inch-perfect view of the world

MIT Technology Review

Niantic's AI spinout is training a new world model using 30 billion images of urban landmarks crowdsourced from players. Pokémon Go was the world's first augmented-reality megahit. Released in 2016 by the Google spinout Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. From Chicago to Oslo to Enoshima, players hit the streets in the urgent hope of catching a Jigglypuff or a Squirtle or (with a huge amount of luck) an ultra-rare Galarian Zapdos hovering just out of reach, superimposed on the everyday world. "Five hundred million people installed that app in 60 days," says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out in May last year. According to the video-game firm Scopely, which bought Pokémon Go from Niantic at the same time, the game still drew more than 100 million players in 2024, eight years after it launched.


7608de7a475c0c878f60960d72a92654-Paper.pdf

Neural Information Processing Systems

Introspection reveals that our meta-learned LAs learn through fast association in a way that is qualitatively different from gradient descent.



GV-Rep: A Large-Scale Dataset for Genetic Variant Representation Learning

Neural Information Processing Systems

The development of deep learning approaches for modeling these multifactorial effects of GVs is still in its nascent stages, primarily due to the lack of comprehensive datasets that capture the intricate relationships between GVs and their downstream effects on complex traits.





Navigating Extremes: Dynamic Sparsity in Large Output Spaces

Neural Information Processing Systems

In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights.
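The masking trick the abstract describes can be illustrated with a minimal NumPy sketch (this is an illustrative assumption, not code from the paper): the "sparse" weights are stored as a dense array with most entries zeroed, so the forward pass still pays for a full dense matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense weight matrix and a binary mask keeping ~10% of the entries.
W = rng.standard_normal((512, 512))
mask = rng.random((512, 512)) < 0.10
W_masked = W * mask  # "sparse" weights, still stored densely

x = rng.standard_normal(512)

# The forward pass remains a full dense matrix-vector product:
# every zeroed entry is still multiplied and summed, so FLOPs and
# memory traffic match the dense model despite ~90% sparsity.
y = W_masked @ x

density = mask.sum() / mask.size
print(f"density: {density:.2f}")
```

A genuinely sparse implementation would store and operate on only the nonzero entries (e.g. in a CSR layout), which is exactly where, per the abstract, current DST implementations fall short on GPUs.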