Goto

Collaborating Authors: heterogeneous
VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data

Neural Information Processing Systems

Deep generative models often perform poorly in real-world applications due to the heterogeneity of natural data sets. Heterogeneity arises from data containing different types of features (categorical, ordinal, continuous, etc.) and from features of the same type having different marginal distributions. We propose an extension of variational autoencoders (VAEs) called VAEM to handle such heterogeneous data. VAEM is a deep generative model that is trained in a two-stage manner, such that the first stage provides a more uniform representation of the data to the second stage, thereby sidestepping the problems caused by heterogeneous data. We provide extensions of VAEM to handle partially observed data, and demonstrate its performance in data generation, missing data prediction and sequential feature selection tasks. Our results show that VAEM broadens the range of real-world applications where deep generative models can be successfully deployed.
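
The two-stage idea can be sketched with simple closed-form stand-ins: stage 1 below maps each feature to a standardized 1-D representation (z-scoring a continuous feature, rank-style midpoints for a categorical one) in place of the paper's per-feature marginal VAEs, and stage 2 fits a full-covariance Gaussian over the resulting latents in place of the dependency VAE. All names and transforms here are illustrative assumptions, not the actual VAEM model.

```python
import numpy as np

# Toy heterogeneous data: a heavy-tailed continuous feature and a
# categorical feature with 3 codes.
rng = np.random.default_rng(0)
n = 2000
x_cont = rng.exponential(scale=50.0, size=n)   # skewed continuous marginal
x_cat = rng.integers(0, 3, size=n)             # categorical codes

# Stage 1: per-feature transforms into a common standardized scale
# (stand-in for training a 1-D-latent VAE per input dimension).
mu, sigma = x_cont.mean(), x_cont.std()
z_cont = (x_cont - mu) / sigma
cat_vals = (x_cat.astype(float) + 0.5) / 3.0   # crude rank midpoints
z_cat = (cat_vals - cat_vals.mean()) / cat_vals.std()

# Stage 2: model dependence between the now-homogeneous latents
# (stand-in for the dependency VAE).
Z = np.stack([z_cont, z_cat], axis=1)
mean = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False)

# Generation: sample latents jointly, then invert stage 1 per feature
# to map back into the original heterogeneous space.
z_new = rng.multivariate_normal(mean, cov, size=5)
x_cont_new = z_new[:, 0] * sigma + mu
```

The point of the decomposition survives even in this toy: stage 2 never sees the raw scales or types, only uniform standardized inputs.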


Minibatch vs Local SGD for Heterogeneous Distributed Learning

Neural Information Processing Systems

We analyze Local SGD (a.k.a. parallel or federated SGD) and Minibatch SGD in the heterogeneous distributed setting, where each machine has access to stochastic gradient estimates for a different, machine-specific, convex objective; the goal is to optimize w.r.t. the average objective; and machines can only communicate intermittently. We argue that (i) Minibatch SGD (even without acceleration) dominates all existing analyses of Local SGD in this setting, and (ii) accelerated Minibatch SGD is optimal when the heterogeneity is high; and (iii) we present the first upper bound for Local SGD that improves over Minibatch SGD in a non-homogeneous regime.
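
The contrast between the two algorithms can be illustrated on a toy heterogeneous problem: machine m holds f_m(w) = ½(w − b_m)², so the average objective is minimized at mean(b). The quadratics, step size, and noise level below are illustrative choices, not from the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, R = 4, 10, 200                    # machines, local steps, rounds
b = np.array([-2.0, -1.0, 1.0, 2.0])    # machine-specific optima (heterogeneity)
noise, lr = 0.5, 0.05

def stoch_grad(w, m):
    # Stochastic gradient of f_m(w) = 0.5 * (w - b[m])**2.
    return (w - b[m]) + noise * rng.standard_normal()

# Local SGD: each machine takes K independent local steps from the
# shared iterate, then the server averages the local iterates.
w_local = 0.0
for _ in range(R):
    iterates = []
    for m in range(M):
        w = w_local
        for _ in range(K):
            w -= lr * stoch_grad(w, m)
        iterates.append(w)
    w_local = float(np.mean(iterates))

# Minibatch SGD: same gradient budget (M * K per round), but every
# gradient is evaluated at the single shared iterate.
w_mb = 0.0
for _ in range(R):
    for _ in range(K):
        g = float(np.mean([stoch_grad(w_mb, m) for m in range(M)]))
        w_mb -= lr * g

opt = b.mean()   # average objective is minimized at 0.0
```

On this symmetric quadratic both methods reach the optimum; the interesting regimes in the paper are precisely those where local drift toward the machine-specific b_m hurts Local SGD.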



A Bayesian Approach for Personalized Federated Learning in Heterogeneous Settings

Neural Information Processing Systems

Federated learning (FL), through its privacy-preserving collaborative learning approach, has significantly empowered decentralized devices. However, constraints in either data and/or computational resources among participating clients introduce several challenges in learning, including the inability to train large model architectures, heightened risks of overfitting, and more. In this work, we present a novel FL framework grounded in Bayesian learning to address these challenges. Our approach involves training personalized Bayesian models at each client tailored to the unique complexities of the clients' datasets and efficiently collaborating across these clients. By leveraging Bayesian neural networks and their uncertainty quantification capabilities, our local training procedure robustly learns from small datasets.
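
A minimal sketch of the underlying idea, collaboration weighted by each client's posterior uncertainty, can be given with conjugate Gaussian models. This is a hand-rolled toy, not the paper's Bayesian-neural-network framework: each client infers a posterior over a scalar parameter from its own (small or large) dataset, and the server combines the posteriors by precision weighting.

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta = 1.5
sizes = [5, 20, 100]                # heterogeneous client dataset sizes
prior_mean, prior_var = 0.0, 10.0   # shared broad Gaussian prior

post_means, post_vars = [], []
for n in sizes:
    y = true_theta + rng.standard_normal(n)   # unit-variance likelihood
    # Conjugate update: Gaussian prior times Gaussian likelihood.
    prec = 1.0 / prior_var + n
    post_means.append((prior_mean / prior_var + y.sum()) / prec)
    post_vars.append(1.0 / prec)

# Server: precision-weighted (inverse-variance) aggregation, so
# uncertain clients with little data contribute less, exactly as
# their posteriors encode.
w = 1.0 / np.array(post_vars)
global_mean = float(np.sum(w * np.array(post_means)) / w.sum())
global_var = float(1.0 / w.sum())
```

The aggregated posterior is both closer to the truth and tighter than any single client's, which is the payoff of uncertainty-aware collaboration.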


Review for NeurIPS paper: VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data

Neural Information Processing Systems

The paper proposes modelling vectors whose dimensions have different types (real-valued and categorical) using a two-stage VAE approach. First, a VAE with a 1D latent is trained separately for each input dimension to standardize the data. Then a "dependency" VAE is trained on top of the resulting latents to capture the dependence between them.

Pros:
- The approach is interesting and novel.
- The idea is simple and seems effective, so it might be widely adopted.
- The paper is well written.
- VAEM outperforms sensible baselines at generative modelling and at a sequential information acquisition task.

Cons:
- It is not explained why the two-stage training approach is a good idea. The fact that joint training tends to perform less well than two-stage training, as reported in the rebuttal, is an important observation that should be discussed and, ideally, explained in the paper.


InfiFusion: A Unified Framework for Enhanced Cross-Model Reasoning via LLM Fusion

Yan, Zhaoyi, Sang, Zhijie, Zhang, Yiming, Fu, Yuhao, He, Baoyi, Zhou, Qi, Di, Yining, Ji, Chunlin, Zhang, Shengyu, Wu, Fei, Yang, Hongxia

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated strong performance across various reasoning tasks, yet building a single model that consistently excels across all domains remains challenging. This paper addresses this problem by exploring strategies to integrate multiple domain-specialized models into an efficient pivot model. We propose two fusion strategies to combine the strengths of multiple LLMs: (1) a pairwise, multi-step fusion approach that sequentially distills each source model into the pivot model, followed by a weight merging step to integrate the distilled models into the final model. This method achieves strong performance but requires substantial training effort; and (2) a unified fusion approach that aggregates all source models' outputs simultaneously. To improve the fusion process, we introduce a novel Rate-Skewness Adaptive Fusion (RSAF) technique, which dynamically adjusts top-K ratios during parameter merging for enhanced flexibility and stability. Furthermore, we propose an uncertainty-based weighting method for the unified approach, which dynamically balances the contributions of source models and outperforms other logits/distribution ensemble methods. We achieved accuracy improvements of 9.27%, 8.80%, and 8.89% on the GSM8K, MATH, and HumanEval tasks, respectively.
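
An uncertainty-based weighting of source-model outputs can be sketched as follows. The entropy-based weights and the toy three-class "vocabulary" are illustrative assumptions standing in for the paper's exact scheme: each source model's output distribution is weighted by a decreasing function of its predictive entropy, so confident models dominate the fused distribution.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def fuse(source_logits):
    """source_logits: (num_models, vocab) -> fused distribution."""
    probs = softmax(np.asarray(source_logits, dtype=float))
    h = entropy(probs)                       # per-model uncertainty
    w = np.exp(-h)                           # confident models weigh more
    w = w / w.sum()
    return (w[:, None] * probs).sum(axis=0)  # convex combination

# A confident model (peaked logits) should dominate an unsure one.
confident = np.array([5.0, 0.0, 0.0])
unsure = np.array([0.1, 0.0, 0.05])
fused = fuse([confident, unsure])
```

Here the fused distribution leans heavily toward the confident model's prediction, while a plain unweighted average would dilute it with the near-uniform source.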

