TETRIS: TilE-matching the TRemendous Irregular Sparsity
Compressing neural networks by pruning weights with small magnitudes can significantly reduce computation and storage costs. Although pruning makes the model smaller, the resulting irregularity makes it difficult to obtain practical speedup on modern computing platforms such as CPUs and GPUs. Structural pruning has attracted a lot of research interest as a way to make sparsity hardware-friendly. Increasing the sparsity granularity can lead to better hardware utilization, but it compromises the sparsity attainable while maintaining accuracy. In this work, we propose a novel method, TETRIS, to achieve both better hardware utilization and higher sparsity. Just like in a tile-matching game, we cluster the irregularly distributed small-magnitude weights into structured groups by reordering the input/output dimensions, and then prune them structurally. Results show that TETRIS achieves sparsity comparable to irregular element-wise pruning with negligible accuracy loss. The experiments also show ideal speedup, proportional to the sparsity, on GPU platforms. Our proposed method provides a new solution toward algorithm-architecture co-optimization for the accuracy-efficiency trade-off.
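The reordering idea in the abstract can be sketched in a few lines of NumPy. This is an illustrative one-shot heuristic (sort rows and columns by magnitude, then zero out the lowest-magnitude blocks), not the paper's actual iterative reordering algorithm; all function and variable names here are our own.

```python
import numpy as np

def block_prune_with_reorder(W, block=4, keep_ratio=0.5):
    # Reorder rows and columns by descending L1 norm so that small-magnitude
    # weights drift toward common blocks (a one-shot greedy heuristic, not
    # necessarily the paper's exact reordering procedure).
    row_order = np.argsort(-np.abs(W).sum(axis=1))
    col_order = np.argsort(-np.abs(W).sum(axis=0))
    Wr = W[row_order][:, col_order]

    # Score each block by its total magnitude on the reordered matrix.
    n, m = Wr.shape
    scores = np.abs(Wr).reshape(n // block, block, m // block, block).sum(axis=(1, 3))

    # Keep only the highest-scoring fraction of blocks; zero out the rest.
    k = max(1, int(np.ceil(keep_ratio * scores.size)))
    thresh = np.sort(scores.ravel())[-k]
    mask = np.kron(scores >= thresh, np.ones((block, block))).astype(bool)
    return Wr * mask, row_order, col_order

np.random.seed(0)
W = np.random.randn(8, 8)
Wp, rows, cols = block_prune_with_reorder(W, block=4, keep_ratio=0.5)
```

Because whole blocks are zeroed, the pruned matrix can be executed with dense block kernels, which is where the sparsity-proportional speedup comes from.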
Approximate Dynamic Programming Finally Performs Well in the Game of Tetris
Tetris is a popular video game that has been widely used as a benchmark for various optimization techniques, including approximate dynamic programming (ADP) algorithms. A close look at the literature on this game shows that ADP algorithms based (almost) entirely on approximating the value function have performed poorly in Tetris, while methods that search directly in the space of policies, learning the policy parameters with a black-box optimizer such as the cross-entropy (CE) method, have achieved the best reported results. This makes us conjecture that Tetris is a game in which good policies are easier to represent, and thus to learn, than their corresponding value functions. So, to obtain good performance with ADP, we should use ADP algorithms that search in a policy space instead of the more traditional ones that search in a value function space. In this paper, we put our conjecture to the test by applying such an ADP algorithm, called classification-based modified policy iteration (CBMPI), to the game of Tetris. Our extensive experimental results show that, for the first time, an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris on both small $10\times 10$ and large $10\times 20$ boards. Although CBMPI's results on the large board are similar to those achieved by the CE method, CBMPI uses considerably fewer samples (calls to the generative model of the game), almost 1/10 as many as CE.
Improving Accuracy and Efficiency of Implicit Neural Representations: Making SIREN a WINNER
Chandravamsi, Hemanth, Shenoy, Dhanush V., Frankel, Steven H.
We identify and address a fundamental limitation of sinusoidal representation networks (SIRENs), a class of implicit neural representations. SIRENs (Sitzmann et al., 2020), when not initialized appropriately, can struggle to fit signals that fall outside their frequency support. In extreme cases, when the network's frequency support misaligns with the target spectrum, a 'spectral bottleneck' phenomenon is observed, where the model yields a near-zero output and fails to recover even the frequency components that are within its representational capacity. To overcome this, we propose WINNER - Weight Initialization with Noise for Neural Representations. WINNER perturbs the uniformly initialized weights of a base SIREN with Gaussian noise whose scale is adaptively determined by the spectral centroid of the target signal. Similar to random Fourier embeddings, this mitigates 'spectral bias' without introducing additional trainable parameters. Our method achieves state-of-the-art audio fitting and significant gains in image and 3D shape fitting tasks over the base SIREN. Beyond signal fitting, WINNER suggests new avenues in adaptive, target-aware initialization strategies for optimizing deep neural network training. For code and data, visit cfdlabtechnion.github.io/siren_square/.
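A simplified sketch of a WINNER-style initialization follows. The base uniform range is the standard SIREN initialization; the mapping from spectral centroid to noise scale (`noise_gain * centroid * bound`) is our assumption for illustration, not the paper's exact formula.

```python
import numpy as np

def winner_like_init(shape, omega0=30.0, target=None, noise_gain=0.5):
    # Base SIREN initialization: uniform in [-sqrt(6/fan_in)/omega0, +bound].
    fan_in = shape[1]
    bound = np.sqrt(6.0 / fan_in) / omega0
    W = np.random.uniform(-bound, bound, size=shape)
    if target is not None:
        # Spectral centroid of the target signal: magnitude-weighted mean
        # frequency. The centroid-to-noise-scale mapping below is an
        # assumption, not the paper's exact rule.
        spectrum = np.abs(np.fft.rfft(target))
        freqs = np.fft.rfftfreq(len(target))
        centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))
        W += np.random.normal(0.0, noise_gain * centroid * bound + 1e-12, size=shape)
    return W

t = np.linspace(0.0, 1.0, 256, endpoint=False)
W = winner_like_init((64, 32), target=np.sin(2 * np.pi * 5.0 * t))
```

A higher-frequency target raises the centroid and hence the perturbation scale, nudging the network's effective frequency support toward the target spectrum.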
lmgame-Bench: How Good are LLMs at Playing Games?
Hu, Lanxiang, Huo, Mingjia, Zhang, Yuxuan, Yu, Haoyang, Xing, Eric P., Stoica, Ion, Rosing, Tajana, Jin, Haojian, Zhang, Hao
Playing video games requires perception, memory, and planning: exactly the faculties modern large language model (LLM) agents are expected to master. We study the major challenges in using popular video games to evaluate modern LLMs and find that directly dropping LLMs into games does not make for an effective evaluation, for three reasons: brittle vision perception, prompt sensitivity, and potential data contamination. We introduce lmgame-Bench to turn games into reliable evaluations. lmgame-Bench features a suite of platformer, puzzle, and narrative games delivered through a unified Gym-style API, paired with lightweight perception and memory scaffolds, and is designed to stabilize prompt variance and remove contamination. Across 13 leading models, we show that lmgame-Bench is challenging while still separating models well. Correlation analysis shows that every game probes a unique blend of capabilities often tested in isolation elsewhere. More interestingly, performing reinforcement learning on a single game from lmgame-Bench transfers both to unseen games and to external planning tasks. Our evaluation code is available at https://github.com/lmgame-org/GamingAgent/lmgame-bench.
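A Gym-style game interface of the kind the abstract describes can be sketched as below. The class and method names are hypothetical illustrations of the reset/step pattern, not the actual lmgame-Bench API.

```python
# Hypothetical sketch of a Gym-style text interface for game evaluation;
# class and method names are illustrative and are NOT the actual
# lmgame-Bench API.
class TextGameEnv:
    def __init__(self, board_width=4):
        self.board_width = board_width
        self.steps = 0

    def reset(self):
        # Return an initial text observation; a perception scaffold would
        # render game frames into symbolic text like this so the LLM does
        # not have to rely on brittle vision.
        self.steps = 0
        return self._observation()

    def step(self, action):
        # Reward valid moves; end the episode after a fixed horizon.
        self.steps += 1
        reward = 1.0 if action in ("left", "right", "rotate", "drop") else 0.0
        done = self.steps >= 10
        return self._observation(), reward, done, {}

    def _observation(self):
        return f"step={self.steps} board_width={self.board_width}"

env = TextGameEnv()
obs = env.reset()
obs, reward, done, info = env.step("drop")
```

Wrapping every game behind one reset/step interface is what lets the same evaluation loop run an LLM agent across platformer, puzzle, and narrative titles.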
'It was just the perfect game': Henk Rogers on buying Tetris and foiling the KGB
When game designer and entrepreneur Henk Rogers first encountered Tetris at the 1988 Las Vegas Consumer Electronics Show, he immediately knew it was special. "It was just the perfect game," he recalls. "It looked so simple, so rudimentary, but I wanted to play it again and again and again … There was no other game demo that ever did that to me." Rogers is now co-owner of the Tetris Company, which manages and licenses the Tetris brand. Over the past 30 years, he has become almost as famous as the game itself. The escapades surrounding his deal to buy its distribution rights from Russian agency Elektronorgtechnica (Elorg) were dramatised in an Apple TV film starring Taron Egerton.
- Asia > Russia (0.73)
- North America > United States > Nevada > Clark County > Las Vegas (0.25)
- Asia > Japan (0.06)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Government > Regional Government > Europe Government > Russia Government (0.35)
- Government > Regional Government > Asia Government > Russia Government (0.35)
Drop Duchy review – a sprawling challenge disguised as a block-dropping puzzler
The indie video game scene is currently dominated by two unassailable genre titans: the rogue-like and the deck-builder. The first is a type of action adventure in which players explore procedurally generated landscapes, where they battle enemies, level up and then die – whereupon they start all over again from scratch. The latter is about building decks of collectible cards (think Pokémon or Magic: The Gathering, but digital) and fighting with them. Titles that combine both in interesting ways – such as Balatro and Slay the Spire – can become huge crossover hits. But the market is getting saturated and so developers are having to find new genres to mix into this potent game design cocktail.
Russia is mocked over plans to launch its own 'Putindo 64' games console in a desperate attempt to shun western technology
In a desperate attempt to shun western technology, President Vladimir Putin has ordered the creation of a new Russian video game console. Now, as a Russian chief admits it won't be as good as the Xbox or the PS5, commentators have flocked to social media to mock the upcoming machine, dubbing it the 'Putindo 64'. On Reddit, someone posted: 'The Putindo is gonna be a wild collectors item in a few decades.' Another user posted 'I only hyev tetris' in a reference to the famous puzzle video game created in 1985 by Russia's Alexey Pajitnov. Another said: 'what is going on with russian government, why are they trying to develop a console when they are fighting at the front.'
- Leisure & Entertainment > Games > Computer Games (1.00)
- Government > Regional Government > Europe Government > Russia Government (0.78)
- Government > Regional Government > Asia Government > Russia Government (0.78)
Tetris Forever is the real story of Tetris - and it's fascinating
Believe me when I say: I truly thought I knew the story of Tetris. The puzzle game's journey from behind the iron curtain in 1980s Moscow to multi-million-selling video game has been the subject of countless articles, a greatly entertaining book and a recent film. I have played Tetris in various forms for more than 30 years, from the Game Boy to the Nintendo Switch, even in VR. So when I loaded up Tetris Forever, an interactive documentary on Tetris's 40-year history from the developers-slash-archivists at Digital Eclipse, I wasn't expecting to learn anything new. I was proven very wrong.
- Asia > Russia (0.32)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.27)
- Asia > Japan (0.06)
- North America > United States > Hawaii (0.05)
Reviews: TETRIS: TilE-matching the TRemendous Irregular Sparsity
This paper deals with sparsifying the weights of a neural net to reduce memory requirements and speed up inference. Simply pruning small weights yields unstructured sparsity, which is hard to exploit with standard libraries and hardware. This paper imposes block sparsity, where each weight tensor is divided into fixed blocks (of size 32 x 32, for example) and non-zero weights are specified in only a fraction of the blocks. The paper's innovation is an iterative algorithm for reordering the rows and columns of a tensor to group together the large weights, reducing the accuracy loss from block pruning. Experiments on the VGG16 network for ImageNet show that this method achieves better speed-accuracy trade-offs than either unstructured weight pruning or block pruning without reordering. Update: I appreciate the authors' responses.
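The iterative row/column reordering the review highlights can be illustrated with a small alternating loop. This is one simple instantiation of the idea (alternately sort rows and columns by L1 norm so large weights migrate toward common blocks); the paper's actual objective and update rule may differ, and the function name is ours.

```python
import numpy as np

def alternate_reorder(W, iters=5):
    # Alternately sort rows then columns of the permuted matrix by
    # descending L1 norm; each pass concentrates large-magnitude weights
    # toward one corner, so they fall into fewer blocks.
    rows = np.arange(W.shape[0])
    cols = np.arange(W.shape[1])
    for _ in range(iters):
        rows = rows[np.argsort(-np.abs(W[rows][:, cols]).sum(axis=1))]
        cols = cols[np.argsort(-np.abs(W[rows][:, cols]).sum(axis=0))]
    return rows, cols

# Small weights sit in two off-diagonal quadrants; reordering gathers the
# large weights into contiguous blocks that survive block pruning.
W = np.array([[0.1, 0.2, 3.0, 4.0],
              [0.1, 0.1, 2.5, 3.5],
              [5.0, 6.0, 0.2, 0.1],
              [4.5, 5.5, 0.1, 0.2]])
rows, cols = alternate_reorder(W)
```

Because the permutations apply to whole input/output dimensions, they can be folded into the adjacent layers at no inference cost, which is why reordering improves the speed-accuracy trade-off over plain block pruning.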