Author response: 'Deep Automodulators' (NeurIPS #6295)

Neural Information Processing Systems

We thank the reviewers for their insights. We read the overall response as positive and address the concerns in the order received. There may be a misunderstanding here: R1's sole concern is that our model is an "extension of the idea in" the cited works. However, the provided references [1, 2] do not show that BEGAN could do such a thing.


Black Panthers Artificial Intelligence: How it All Began

#artificialintelligence

How did they create Black Panther? How did Black Panthers change the world? Where do Black Panther's powers come from? What technology did Black Panthers use? Why was Black Panther so successful? How many Black Panthers are there in the world?


Deep UL2DL: Channel Knowledge Transfer from Uplink to Downlink

Safari, Mohammad Sadegh, Pourahmadi, Vahid, Sodagari, Shabnam

arXiv.org Machine Learning

Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in Frequency Division Duplexing (FDD) systems from Up-Link (UL) CSI is not as straightforward as in TDD systems, so users usually feed back the DL-CSI to the transmitter. To remove the need for feedback (and thus reduce signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI using two recent deep neural network structures: a) Convolutional Neural Networks and b) Generative Adversarial Networks. The proposed deep network structures first learn a latent model of the environment from the training data. Then, the resulting latent model is used to predict the DL-CSI from the UL-CSI. We have simulated the proposed scheme and evaluated its performance in a few network settings. Simulation results (for different multipath environments) demonstrate the efficiency of both direct and generative approaches for UL2DL prediction. One key feature of the new generation of cellular networks is their efficient use of frequency bands and energy. To achieve this goal, they use various techniques such as water-filling, appropriate precoding, and beamforming. In Time Division Duplexing (TDD) systems, Up-Link (UL) and Down-Link (DL) frequencies are equal, so we can use channel reciprocity and simply infer the DL channel by observing the UL channel.
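The core idea above, that UL and DL CSI share a common latent multipath state, so a model fitted on paired observations can predict one from the other, can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in: a linear least-squares predictor replaces the paper's CNN, and the simulated "channels" are random linear mixtures of shared latent path gains rather than a real multipath model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for UL->DL CSI prediction (the "direct" approach):
# simulate correlated UL/DL channel vectors that share a latent
# multipath state, then fit a linear predictor by least squares
# (a hypothetical placeholder for the paper's CNN).
n_paths, n_sub, n_train = 4, 16, 1000

# Shared latent multipath gains generate both UL and DL CSI
latent = rng.standard_normal((n_train, n_paths))
B_ul = rng.standard_normal((n_paths, n_sub))
B_dl = rng.standard_normal((n_paths, n_sub))
ul = latent @ B_ul + 0.05 * rng.standard_normal((n_train, n_sub))
dl = latent @ B_dl + 0.05 * rng.standard_normal((n_train, n_sub))

# Least-squares predictor W so that dl ~= ul @ W
W, *_ = np.linalg.lstsq(ul, dl, rcond=None)

# Evaluate on fresh channel realizations
lat_t = rng.standard_normal((200, n_paths))
ul_t = lat_t @ B_ul
dl_t = lat_t @ B_dl
err = np.linalg.norm(ul_t @ W - dl_t) / np.linalg.norm(dl_t)
print(err)
```

Because the UL observation determines the low-dimensional latent state (more subcarriers than paths), the relative prediction error is small; the point of the deep models in the paper is to learn such a latent structure when it is nonlinear and unknown.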


Generative Imaging and Image Processing via Generative Encoder

Chen, Lin, Yang, Haizhao

arXiv.org Machine Learning

This paper introduces a novel generative encoder (GE) model for generative imaging and image processing with applications in compressed sensing and imaging, image compression, denoising, inpainting, deblurring, and super-resolution. The GE model consists of a pre-training phase and a solving phase. In the pre-training phase, we separately train two deep neural networks: a generative adversarial network (GAN) with a generator $\G$ that captures the data distribution of a given image set, and an auto-encoder (AE) network with an encoder $\EN$ that compresses images following the distribution estimated by the GAN. In the solving phase, given a noisy image $x=\mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding an additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m=\EN(x)$, we solve the optimization problem \[ z^*=\underset{z}{\mathrm{argmin}} \|\EN(\G(z))-m\|_2^2+\lambda\|z\|_2^2 \] to recover the image $x^*$ in a generative way via $\hat{x}:=\G(z^*)\approx x^*$, where $\lambda>0$ is a hyperparameter. The GE model unifies the generative capacity of GANs and the stability of AEs in the optimization framework above instead of stacking GANs and AEs into a single network or combining their loss functions into one as in existing literature. Numerical experiments show that the proposed model outperforms several state-of-the-art algorithms.
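The solving phase above can be sketched in miniature. In this toy version, both the "generator" $\G$ and the "encoder" $\EN$ are fixed random linear maps (hypothetical placeholders for the trained networks), so the latent optimization reduces to a ridge-regularized least-squares problem, which we solve by plain gradient descent as one would for the nonlinear case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stand-ins for the trained networks (assumption: in the paper
# G is a GAN generator and EN an AE encoder; here both are matrices).
d_z, d_x, d_m = 8, 32, 16
G = rng.standard_normal((d_x, d_z))   # "generator": latent -> image
EN = rng.standard_normal((d_m, d_x))  # "encoder": image -> code

z_true = rng.standard_normal(d_z)
m = EN @ (G @ z_true)                 # observed compressed code m = EN(x)

lam = 1e-3
A = EN @ G                            # composition EN o G as one matrix

# Gradient descent on ||EN(G(z)) - m||^2 + lam * ||z||^2
z = np.zeros(d_z)
lr = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2 + 2.0 * lam)  # safe step size
for _ in range(5000):
    grad = 2.0 * A.T @ (A @ z - m) + 2.0 * lam * z
    z -= lr * grad

x_hat = G @ z                         # recovered "image" G(z*)
rel_residual = np.linalg.norm(A @ z - m) / np.linalg.norm(m)
print(rel_residual)
```

With deep networks in place of the matrices, the same loop runs with autodiff gradients through $\EN(\G(z))$; the small latent dimension and the $\lambda\|z\|_2^2$ term are what regularize the inversion.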


Are GANs Created Equal? A Large-Scale Study

Lucic, Mario, Kurach, Karol, Michalski, Marcin, Gelly, Sylvain, Bousquet, Olivier

arXiv.org Machine Learning

Generative adversarial networks (GANs) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted, large-scale empirical study on state-of-the-art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise from a higher computational budget and tuning rather than from fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures. Finally, we did not find evidence that any of the tested algorithms consistently outperforms the original one.


Work in progress: Portraits of Imaginary people

@machinelearnbot

For a while now I've been experimenting with ways to use generative neural nets to make portraits. Early experiments were based on deepdream-like approaches using backprop to the image, but lately I've focused on GANs. As always, resolution and fine detail are really difficult to achieve. For starters, the receptive field of these networks is usually less than 256x256 pixels. One way around this is tiling combined with stacking GANs, which many people have experimented with; for example, this paper uses a two-stage GAN to get high resolution: (https://arxiv.org/abs/1612.03242). I tried a similar approach and I've finally been having some more success upres-ing GAN-generated faces to 768x768 pixels in two stages, and in some cases as far as 4k x 4k using three stages.