Balanced Training for Sparse GANs

Yite Wang

Neural Information Processing Systems

Over the past few years, there has been growing interest in developing larger and deeper neural networks, including deep generative models like generative adversarial networks (GANs). However, GANs typically come with high computational complexity, leading researchers to explore methods for reducing the training and inference costs. One such approach gaining popularity in supervised learning is dynamic sparse training (DST), which maintains good performance while enjoying excellent training efficiency. Despite its potential benefits, applying DST to GANs presents challenges due to the adversarial nature of the training process. In this paper, we propose a novel metric called the balance ratio (BR) to study the balance between the sparse generator and discriminator. We also introduce a new method called balanced dynamic sparse training (ADAPT), which seeks to control the BR during GAN training to achieve a good trade-off between performance and computational cost. Our proposed method shows promising results on multiple datasets, demonstrating its effectiveness. Our code is available at https://github.com/YiteWang/ADAPT.
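The abstract does not spell out ADAPT's update rule or the balance-ratio formula, but the general dynamic-sparse-training loop it builds on (as in SET/RigL-style methods) alternates magnitude pruning with regrowth while keeping a fixed density. A minimal sketch of one such mask-update step, purely illustrative and not the paper's ADAPT algorithm:

```python
import numpy as np

def update_mask(weights, mask, density, regrow_frac=0.3, rng=None):
    """One SET-style dynamic-sparse-training step (illustrative, not ADAPT):
    prune the smallest-magnitude active weights, then regrow the same number
    of connections at random inactive positions, preserving overall density."""
    rng = np.random.default_rng() if rng is None else rng
    n_total = weights.size
    n_active = int(density * n_total)
    n_regrow = int(regrow_frac * n_active)

    # Prune: among currently active weights, keep all but the n_regrow smallest.
    active = np.flatnonzero(mask.ravel())
    keep = active[np.argsort(np.abs(weights.ravel()[active]))[n_regrow:]]

    # Grow: activate n_regrow randomly chosen currently-inactive positions.
    inactive = np.setdiff1d(np.arange(n_total), active)
    grown = rng.choice(inactive, size=n_regrow, replace=False)

    new_mask = np.zeros(n_total, dtype=bool)
    new_mask[keep] = True
    new_mask[grown] = True
    return new_mask.reshape(mask.shape)
```

In a GAN setting one would run such updates separately for the generator and discriminator; the paper's contribution is to steer those densities using the balance ratio, whose exact definition is given in the paper itself.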


How Google is using Gemini for robotics

Mashable

The tech giant is advancing ALOHA 2 and teaming up with humanoid robot developers like Apptronik and Boston Dynamics. By Jesse Orrall on May 27, 2025. In a recent demo at its I/O developer conference, Google put Gemini AI to work inside the ALOHA 2 robot, showing how language and vision models can help robots perform complex, real-world tasks.


The roborock Qrevo Edge robot vacuum is back down to its lowest-ever price

Mashable

SAVE $500: As of May 27, the roborock Qrevo Edge is on sale for $1,099.99 at Amazon. Memorial Day sales are still underway at Amazon, and you'll find a whole lot of discounts on robot vacuums. We can't ignore this incredible deal on the roborock Qrevo Edge, currently down to its lowest-ever price. As of May 27, this impressive vacuum is reduced by $500, now priced at $1,099.99. You may be thinking that $1,000 is still a hefty price tag for a vacuum, but with vacuuming and mopping functions, this robot will be a truly useful addition to your home.




A Appendix

Neural Information Processing Systems

A.1 Graph-building strategies

The graphs were built using the IsayevNN class from the pymatgen [48] package. It implements the commonly used Voronoi tessellation to define neighbors. Two atoms are considered bonded if they share a face in the Voronoi tessellation of the supercell and their distance is less than the sum of their atomic Cordero radii (a measure of the atomic radius) plus a cutoff of 0.5 Å. This cutoff was increased compared to [32] to reduce the number of disconnected graphs. A hard cutoff of 6 Å is also imposed on interatomic distances. We provide statistics for the graphs obtained by the method described in Section 5.
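The bonding criterion above can be written down directly. A minimal sketch, using a small table of approximate Cordero covalent radii (values here are illustrative) and a precomputed Voronoi face-sharing flag in place of pymatgen's full IsayevNN machinery:

```python
# Approximate Cordero covalent radii in angstroms (illustrative values only).
CORDERO_RADII = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66, "Si": 1.11}

def bonded(elem_i, elem_j, distance, face_shared, tol=0.5, hard_cutoff=6.0):
    """Bonding rule sketched from the text: two atoms are bonded if they
    share a Voronoi face AND their distance is below the sum of their
    Cordero radii plus a 0.5 A tolerance, subject to a 6 A hard cutoff."""
    if not face_shared or distance > hard_cutoff:
        return False
    return distance < CORDERO_RADII[elem_i] + CORDERO_RADII[elem_j] + tol
```

In practice the face-sharing flag would come from the Voronoi tessellation of the supercell, which pymatgen computes internally.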


Equivariant Networks for Crystal Structures

Neural Information Processing Systems

Supervised learning with deep models has tremendous potential for applications in materials science. Recently, graph neural networks have been used in this context, drawing direct inspiration from models for molecules. However, materials are typically much more structured than molecules, a feature that these models do not leverage. In this work, we introduce a class of models that are equivariant with respect to crystalline symmetry groups. We do this by defining a generalization of the message passing operations that can be used with more general permutation groups, or that can alternatively be seen as defining an expressive convolution operation on the crystal graph. Empirically, these models achieve results competitive with the state of the art on property prediction tasks.
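One generic way to make a message-passing layer equivariant to a finite permutation group acting on the nodes is group averaging: conjugate a base operation by every group element and average. This is a standard symmetrization trick, sketched below as a stand-in for (not a reproduction of) the paper's construction:

```python
import numpy as np

def equivariant_layer(X, A, W, perms):
    """Group-averaged message passing (sketch). `perms` lists the permutation
    matrices of a finite group G acting on the nodes. Averaging the base
    operation over G yields a map f with f(g.X) = g.f(X) for every g in G."""
    base = lambda Z: np.tanh(A @ Z @ W)     # plain message-passing step
    out = np.zeros_like(X @ W)
    for P in perms:
        out += P.T @ base(P @ X)            # conjugate by each group element
    return out / len(perms)
```

Equivariance follows because left-multiplying the input by any group element merely permutes the terms of the sum over G.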


Gene-Gene Relationship Modeling Based on Genetic Evidence for Single-Cell RNA-Seq Data Imputation

Neural Information Processing Systems

Single-cell RNA sequencing (scRNA-seq) technologies enable the exploration of cellular heterogeneity and facilitate the construction of cell atlases. However, scRNA-seq data often contain a large portion of missing values (false zeros) or noisy values, hindering downstream analyses. To recover these false zeros, propagation-based imputation methods have been proposed using k-NN graphs. However, they model only associating relationships among genes within a cell, while, according to well-known genetic evidence, there are both associating and dissociating relationships among genes. To apply this genetic evidence to gene-gene relationship modeling, this paper proposes a novel imputation method that newly employs dissociating relationships in addition to associating relationships. Our method constructs a k-NN graph to additionally model dissociating relationships via the negation of a given cell-gene matrix. Moreover, our method standardizes the value distribution (mean and variance) of each gene to have standard distributions regardless of the gene. Through extensive experiments, we demonstrate that the proposed method achieves exceptional performance gains over state-of-the-art methods in both cell clustering and gene expression recovery across six scRNA-seq datasets, validating the significance of using complete gene-gene relationships in accordance with genetic evidence. The source code is available at https://github.com/daehoum1/scCR.
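The two ingredients named in the abstract, per-gene standardization and a k-NN graph built against the negated matrix, can be sketched with correlation-based neighbors: once genes are z-scored, cosine similarity of expression profiles equals Pearson correlation, and matching against the negated matrix makes strongly anti-correlated genes neighbors. The helper names below are hypothetical and this is a sketch of the idea, not the exact scCR construction:

```python
import numpy as np

def standardize_genes(X):
    """Z-score each gene (column) so every gene contributes on the same scale."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd > 0, sd, 1.0)

def gene_knn(Z, k, dissociating=False):
    """k nearest gene neighbors by cosine similarity of expression profiles.
    With dissociating=True, genes are compared against the negated matrix,
    so anti-correlated genes become neighbors (dissociating relationships)."""
    target = -Z if dissociating else Z
    norms = np.linalg.norm(Z, axis=0, keepdims=True)
    S = (Z / norms).T @ (target / norms)      # gene-by-gene cosine similarity
    np.fill_diagonal(S, -np.inf)              # exclude self-edges
    return np.argsort(-S, axis=1)[:, :k]      # indices of top-k neighbors
```

A propagation-based imputer would then smooth expression along the associating graph while using the dissociating graph to push anti-related genes apart.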


Joint Training of Deep Ensembles Fails Due to Learner Collusion

Neural Information Processing Systems

Ensembles of machine learning models have been well established as a powerful method of improving performance over a single model. Traditionally, ensembling algorithms train their base learners independently or sequentially with the goal of optimizing their joint performance. In the case of deep ensembles of neural networks, we are provided with the opportunity to directly optimize the true objective: the joint performance of the ensemble as a whole. Surprisingly, however, directly minimizing the loss of the ensemble appears to rarely be applied in practice. Instead, most previous research trains individual models independently with ensembling performed post hoc.
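For squared loss, the classical Krogh-Vedelsby ambiguity decomposition makes the tension concrete: the joint loss of the averaged prediction equals the average individual loss minus a diversity term, so joint training can lower the objective by inflating diversity rather than by fitting the target, which is the flavor of collusion the title points at. A small numerical sketch of the identity (a textbook decomposition, not the paper's analysis):

```python
import numpy as np

def independent_loss(preds, y):
    """Mean of each base learner's own squared error (post-hoc ensembling
    trains each model against y separately)."""
    return np.mean([(p - y) ** 2 for p in preds], axis=0).mean()

def joint_loss(preds, y):
    """Squared error of the ensemble average -- the 'true' joint objective."""
    return ((np.mean(preds, axis=0) - y) ** 2).mean()

def diversity(preds):
    """Ambiguity term: spread of the learners around their mean prediction."""
    pbar = np.mean(preds, axis=0)
    return np.mean([(p - pbar) ** 2 for p in preds], axis=0).mean()
```

The identity joint_loss = independent_loss - diversity holds exactly for squared loss, for any predictions.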