How Pokémon Go is giving delivery robots an inch-perfect view of the world

MIT Technology Review

Niantic's AI spinout is training a new world model using 30 billion images of urban landmarks crowdsourced from players. Pokémon Go was the world's first augmented-reality megahit. Released in 2016 by the Google spinout Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. From Chicago to Oslo to Enoshima, players hit the streets in the urgent hope of catching a Jigglypuff or a Squirtle or (with a huge amount of luck) an ultra-rare Galarian Zapdos hovering just out of reach, superimposed on the everyday world. "Five hundred million people installed that app in 60 days," says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out in May last year. According to the video-game firm Scopely, which bought Pokémon Go from Niantic at the same time, the game still drew more than 100 million players in 2024, eight years after it launched.


Russia-Ukraine war: List of key events, day 1,456

Al Jazeera

Russian forces launched multiple attacks on Ukraine's Zaporizhzhia region, killing one person and injuring seven others over the past day, the region's military administration said on the Telegram messaging platform. The attacks involved 448 drones as well as 163 artillery strikes, damaging 136 homes, cars and other structures, the military administration said. Russian forces also continued shelling Ukraine's Donetsk region, forcing 173 people, including 135 children, to evacuate front-line areas over the past day, regional governor Vadym Filashkin said on Telegram.





ReST-MCTS: LLM Self-Training via Process Reward Guided Tree Search

Dan Zhang

Neural Information Processing Systems

Recent methodologies in LLM self-training mostly rely on the LLM generating responses and filtering those with correct final answers for use as training data. Because a response can reach the right answer through flawed reasoning, this approach often yields a low-quality fine-tuning set (e.g., responses containing incorrect plans or intermediate reasoning steps).
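The answer-filtering baseline the abstract critiques can be sketched as follows. This is a minimal illustration, not the paper's method: `generate` and `extract_answer` are hypothetical stand-ins for sampling from an LLM and parsing its output.

```python
# Sketch of answer-filtered self-training data collection (the baseline
# approach the abstract critiques). Hypothetical helpers stand in for an
# actual LLM: `generate` samples candidate responses, `extract_answer`
# parses a final answer out of a response.
from typing import Callable


def collect_self_training_data(
    labeled_questions: list[tuple[str, str]],
    generate: Callable[[str], list[str]],
    extract_answer: Callable[[str], str],
) -> list[tuple[str, str]]:
    """Keep (question, response) pairs whose final answer matches the gold answer.

    Note the weakness the abstract points out: a response with flawed
    intermediate reasoning can still land on the correct final answer
    and pass this filter unchanged.
    """
    dataset = []
    for question, gold in labeled_questions:
        for response in generate(question):
            if extract_answer(response) == gold:
                dataset.append((question, response))
    return dataset


# Toy demo with a fake "model" that returns canned responses.
def fake_generate(question: str) -> list[str]:
    return ["reasoning... answer: 4", "reasoning... answer: 5"]


def fake_extract(response: str) -> str:
    return response.rsplit("answer: ", 1)[-1]


data = collect_self_training_data([("2+2?", "4")], fake_generate, fake_extract)
# Only the response whose extracted answer matches "4" is retained.
```

The filter checks only the final answer, so the retained response's chain of reasoning is never inspected — which is exactly the quality gap process-reward-guided approaches like ReST-MCTS aim to close.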