Unsupervised or Indirectly Supervised Learning
Reviews: Quantum Wasserstein Generative Adversarial Networks
After rebuttal: Thank you for the rebuttal. It helped me better understand the sampling and the evaluation of the loss. Also, since your scheme is not designed to generalize OT to the quantum setting, I am fine with the quantum Wasserstein semimetric not allowing a general cost function. Based on these points and the promising real-life experiment mentioned in the rebuttal, I have decided to raise my score to marginally above the acceptance threshold. The properties required for a semimetric are shown, and the authors furthermore show that it behaves smoothly with respect to the quantum states.
Reviews: Quantum Wasserstein Generative Adversarial Networks
Please take into account the new comments brought forward by the new reviewer. This accept decision is somewhat conditional on your including these references more clearly in the final version of the paper. We strongly urge you to do so, and trust you on this, because without these references and a clearer discussion of what has been considered by other authors, the paper in its current form would be a borderline reject. Please spend at least half a page clarifying connections with prior quantum Wasserstein work.
A Unified Knowledge-Distillation and Semi-Supervised Learning Framework to Improve Industrial Ads Delivery Systems
Eghbalzadeh, Hamid, Wang, Yang, Li, Rui, Mo, Yuji, Ding, Qin, Fu, Jiaxiang, Dai, Liang, Gu, Shuo, Noorshams, Nima, Park, Sem, Long, Bo, Feng, Xue
Industrial ads ranking systems conventionally rely on labeled impression data, which leads to challenges such as overfitting, slower incremental gains from model scaling, and biases due to discrepancies between training and serving data. To overcome these issues, we propose a Unified framework for Knowledge-Distillation and Semi-supervised Learning (UKDSL) for ads ranking, empowering the training of models on significantly larger and more diverse datasets, thereby reducing overfitting and mitigating training-serving data discrepancies. We provide detailed formal analysis and numerical simulations of the inherent miscalibration and prediction bias of multi-stage ranking systems, and show empirical evidence of the proposed framework's capability to mitigate them. Compared to prior work, UKDSL enables models to learn from a much larger set of unlabeled data, improving performance while remaining computationally efficient. Finally, we report the successful deployment of UKDSL in an industrial setting across various ranking models, serving users at multi-billion scale across various surfaces, geographic locations, and clients, and optimizing for various events, which to the best of our knowledge is the first of its kind in terms of the scale and efficiency at which it operates.
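The combination the abstract describes, supervised learning on labeled impressions plus distillation from a teacher's soft predictions on unlabeled data, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the loss forms, and the `alpha` balancing weight are all assumptions chosen for illustration.

```python
import numpy as np

def ukdsl_loss(student_logits_lab, labels, student_logits_unlab,
               teacher_probs_unlab, alpha=0.5):
    """Illustrative combined objective: supervised cross-entropy on labeled
    impressions plus a distillation term matching the student to a teacher's
    soft predictions on unlabeled data. All names are placeholders."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Supervised term: cross-entropy on the labeled impressions.
    p_lab = softmax(student_logits_lab)
    ce = -np.log(p_lab[np.arange(len(labels)), labels] + 1e-12).mean()

    # Distillation term: cross-entropy against the teacher's soft labels
    # on unlabeled data, which lets the student see a much larger dataset.
    p_unlab = softmax(student_logits_unlab)
    distill = -(teacher_probs_unlab * np.log(p_unlab + 1e-12)).sum(axis=1).mean()

    return alpha * ce + (1 - alpha) * distill
```

Because the distillation term needs no labels, the unlabeled batch can come from a far larger and more diverse pool than the impression data, which is the mechanism the abstract credits for reduced overfitting.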
Reviews: Face Reconstruction from Voice using Generative Adversarial Networks
The paper proposes a very novel method that estimates a face from a voice, trained in a supervised manner. The reviewers were initially not convinced, with some disagreement. The rebuttal was satisfying enough that one reviewer changed their score from weak rejection to acceptance. Thus, after a discussion with the Senior Area Chair, the paper is accepted. This meta-review was reviewed and revised by the Program Chairs, based on discussions with the Senior Area Chair.
Improving realistic semi-supervised learning with doubly robust estimation
Pham, Khiem, Herrmann, Charles, Zabih, Ramin
A major challenge in Semi-Supervised Learning (SSL) is the limited information available about the class distribution in the unlabeled data. In many real-world applications this arises from the prevalence of long-tailed distributions, where the standard pseudo-label approach to SSL is biased towards the labeled class distribution and thus performs poorly on unlabeled data. Existing methods typically assume that the unlabeled class distribution is either known a priori, which is unrealistic in most situations, or estimate it on-the-fly using the pseudo-labels themselves. We propose to explicitly estimate the unlabeled class distribution, which is a finite-dimensional parameter, \emph{as an initial step}, using a doubly robust estimator with a strong theoretical guarantee; this estimate can then be integrated into existing methods to pseudo-label the unlabeled data during training more accurately. Experimental results demonstrate that incorporating our techniques into common pseudo-labeling approaches improves their performance.
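The doubly robust idea the abstract describes can be sketched in a few lines: average the classifier's predicted class probabilities over the unlabeled data, then add a correction term computed from the residuals on labeled data, so miscalibration in the model is compensated rather than propagated into the prior estimate. This is a generic sketch of that estimator shape, not the paper's actual construction; the function name and arguments are invented for illustration.

```python
import numpy as np

def doubly_robust_class_prior(probs_unlabeled, probs_labeled, y_labeled, n_classes):
    """Sketch of a doubly robust estimate of the unlabeled class distribution.

    probs_unlabeled: (n_u, K) predicted class probabilities on unlabeled data
    probs_labeled:   (n_l, K) predicted class probabilities on labeled data
    y_labeled:       (n_l,)   true labels for the labeled data
    """
    # Direct (model-based) term: average predicted probabilities on unlabeled data.
    direct = probs_unlabeled.mean(axis=0)
    # Correction term: average residual between true one-hot labels and
    # predictions on labeled data; cancels systematic miscalibration.
    onehot = np.eye(n_classes)[y_labeled]
    correction = (onehot - probs_labeled).mean(axis=0)
    return direct + correction
```

The resulting vector is a finite-dimensional parameter, as the abstract notes, and can be plugged into an existing pseudo-labeling method, e.g. to reweight or threshold pseudo-labels toward the estimated unlabeled distribution.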
Segmentation-Aware Generative Reinforcement Network (GRN) for Tissue Layer Segmentation in 3-D Ultrasound Images for Chronic Low-back Pain (cLBP) Assessment
Zeng, Zixue, Zhao, Xiaoyan, Cartier, Matthew, Yu, Tong, Wang, Jing, Meng, Xin, Sheng, Zhiyu, Satarpour, Maryam, Cormack, John M, Bean, Allison, Nussbaum, Ryan, Maurer, Maya, Landis-Walkenhorst, Emily, Kumbhare, Dinesh, Kim, Kang, Wasan, Ajay, Pu, Jiantao
We introduce a novel segmentation-aware joint training framework called generative reinforcement network (GRN) that integrates segmentation loss feedback to optimize both image generation and segmentation performance in a single stage. An image enhancement technique called segmentation-guided enhancement (SGE) is also developed, where the generator produces images tailored specifically for the segmentation model. Two variants of GRN were also developed, including GRN for sample-efficient learning (GRN-SEL) and GRN for semi-supervised learning (GRN-SSL). GRN's performance was evaluated using a dataset of 69 fully annotated 3D ultrasound scans from 29 subjects. The annotations included six anatomical structures: dermis, superficial fat, superficial fascial membrane (SFM), deep fat, deep fascial membrane (DFM), and muscle. Our results show that GRN-SEL with SGE reduces labeling efforts by up to 70% while achieving a 1.98% improvement in the Dice Similarity Coefficient (DSC) compared to models trained on fully labeled datasets. GRN-SEL alone reduces labeling efforts by 60%, GRN-SSL with SGE decreases labeling requirements by 70%, and GRN-SSL alone by 60%, all while maintaining performance comparable to fully supervised models. These findings suggest the effectiveness of the GRN framework in optimizing segmentation performance with significantly less labeled data, offering a scalable and efficient solution for ultrasound image analysis and reducing the burdens associated with data annotation.
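The single-stage joint training the abstract describes, where segmentation loss feedback shapes the generator, amounts to optimizing one combined objective. The sketch below assumes a mean-squared reconstruction loss for the generator and a Dice loss for the segmenter; the names, the specific loss forms, and the `lam` weight are illustrative assumptions, not the paper's code.

```python
import numpy as np

def grn_joint_loss(enhanced_image, target_image, seg_probs, seg_mask, lam=1.0):
    """Illustrative single-stage GRN-style objective: an image reconstruction
    term for the generator plus a Dice segmentation term fed back to it, so
    the generator learns to produce images tailored for the segmenter."""
    # Generator term: how well the enhanced image matches the target.
    recon = np.mean((enhanced_image - target_image) ** 2)
    # Segmentation feedback term: soft Dice loss between predicted
    # probabilities and the ground-truth mask (0 when they agree perfectly).
    inter = 2.0 * np.sum(seg_probs * seg_mask)
    dice = 1.0 - inter / (np.sum(seg_probs) + np.sum(seg_mask) + 1e-8)
    return recon + lam * dice
```

Backpropagating this combined loss through both networks in one stage is what distinguishes the setup from the usual pipeline of first training a generator and then, separately, a segmentation model on its outputs.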
Reviews: A Flexible Generative Framework for Graph-based Semi-supervised Learning
This work employs techniques developed in the network science literature, such as the latent space model (LSM) and the stochastic block model (SBM), to propose a generative model for features X, outputs Y, and graph G, and it uses graph neural networks to approximate the posterior of the missing outputs given X, the observed Y, and G. This work is a wise combination of recent methods to effectively address the problem of graph-based semi-supervised learning. However, I have some concerns, which are summarized as follows: - Although the paper proposes a new, interesting generative method for graph-based semi-supervised learning, it is not especially novel, as it employs other existing methods as building blocks, like LSM, SBM, GCN, and GAT. - It seems the model is only generative for G given X and Y; by factorizing the other part as p(Y, X) = p(Y|X) p(X), the term p(Y|X) is modeled via a multi-layer perceptron, which is a discriminative model. That is why the authors discard X in all the analyses, like any other discriminative model, and say that everything is conditioned on X. I think this makes the proposed model not fully generative: it is only generative for G, not for X and Y.
A Flexible Generative Framework for Graph-based Semi-supervised Learning
Jiaqi Ma, Weijing Tang, Ji Zhu, Qiaozhu Mei
We consider a family of problems concerned with making predictions for the majority of unlabeled, graph-structured data samples based on a small proportion of labeled samples. Relational information among the data samples, often encoded in the graph/network structure, is shown to be helpful for these semi-supervised learning tasks. However, conventional graph-based regularization methods and recent graph neural networks do not fully leverage the interrelations between the features, the graph, and the labels. In this work, we propose a flexible generative framework for graph-based semi-supervised learning, which models the joint distribution of the node features, the labels, and the graph structure. Borrowing insights from random graph models in the network science literature, this joint distribution can be instantiated with various distribution families. For the inference of missing labels, we exploit recent advances in scalable variational inference techniques to approximate the Bayesian posterior. We conduct thorough experiments on benchmark datasets for graph-based semi-supervised learning. Results show that the proposed methods outperform state-of-the-art models in most settings.
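A standard variational objective for this kind of setup, written in our own notation rather than necessarily the paper's, treats the missing labels as latent variables and lower-bounds the evidence of the observed graph and labels:

```latex
\log p_\theta(G, Y_{\mathrm{obs}} \mid X) \;\geq\;
\mathbb{E}_{q_\phi(Y_{\mathrm{miss}} \mid X, Y_{\mathrm{obs}}, G)}
\big[ \log p_\theta(G \mid X, Y) + \log p_\theta(Y \mid X) \big]
\;+\; \mathbb{H}\big[ q_\phi(Y_{\mathrm{miss}} \mid X, Y_{\mathrm{obs}}, G) \big]
```

Here $Y = (Y_{\mathrm{obs}}, Y_{\mathrm{miss}})$, the edge model $p_\theta(G \mid X, Y)$ can be instantiated with an LSM- or SBM-style random graph family, and $q_\phi$ is the approximate posterior over missing labels, parameterized by a graph neural network.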
Reviews: A Flexible Generative Framework for Graph-based Semi-supervised Learning
This paper proposes a generative framework for graph-based semi-supervised learning that approximates the joint distribution of the graph structure, the labels, and the node features. Variational inference techniques are then used to approximate the Bayesian posterior. The paper is well written. Reviewer 3 raised some issues regarding a better positioning of GenGNN with respect to GCN/GAT, which the authors are recommended to take into account for the final version of the paper.
We sincerely thank all the reviewers for their insightful comments, which help us improve the paper. To Reviewer #2: 1. Are multiple sources more beneficial? This is largely because a domain gap also exists among different source domains. We will reorganize the layout of Figure 1 in the main paper to make it clearer. We thank the reviewer for pointing this out.