Deep Exemplar-based Video Colorization

arXiv.org Artificial Intelligence

This paper presents the first end-to-end network for exemplar-based video colorization. The main challenge is to achieve temporal consistency while remaining faithful to the reference style. To address this issue, we introduce a recurrent framework that unifies the semantic correspondence and color propagation steps. Both steps allow a provided reference image to guide the colorization of every frame, thus reducing accumulated propagation errors. Video frames are colorized in sequence based on the colorization history, and their coherency is further enforced by a temporal consistency loss. All of these components, learned end-to-end, help produce realistic videos with good temporal stability. Experiments show our results are superior to state-of-the-art methods both quantitatively and qualitatively.
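
As a rough illustration of the temporal consistency idea, here is a minimal PyTorch-style sketch of such a loss: it motion-compensates the previous frame's colorization and penalizes disagreement with the current one. The flow-based warping, the occlusion `mask`, and all shapes are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(pred_t, pred_prev, grid, mask):
    """Penalize color changes between consecutive frames after motion
    compensation. `grid` warps frame t-1 into frame t's coordinates
    (normalized flow coordinates in [-1, 1], shape (B, H, W, 2));
    `mask` downweights occluded pixels. A simplified stand-in for the
    loss named in the abstract, not the paper's exact definition.
    """
    warped_prev = F.grid_sample(pred_prev, grid, align_corners=True)
    return (mask * (pred_t - warped_prev).abs()).mean()
```

In a sequential colorization loop, this term would be summed over consecutive frame pairs alongside the per-frame reconstruction and style losses.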


Learning Generative Neural Networks for 3D Colorization

AAAI Conferences

Automatic generation of 3D visual content is a fundamental problem at the intersection of visual computing and artificial intelligence. So far, most existing work has focused on geometry synthesis. In contrast, advances in the automatic synthesis of color information, which conveys rich semantic information about 3D geometry, remain rather limited. In this paper, we propose to learn a generative model that maps a latent color parameter space to a space of colorizations across a shape collection. The colorizations are diverse on each shape and consistent across the shape collection. We introduce an unsupervised approach for training this generative model and demonstrate its effectiveness across a wide range of categories. The key feature of our approach is that it requires only one colorization per shape in the training data, and it uses a neural network to propagate color information from other shapes when training the generative model for each particular shape. This characteristic makes our approach applicable to standard internet shape repositories.
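
To make the "latent color parameter space" concrete, here is a hypothetical sketch of such a generator: a small MLP that maps per-point shape features plus a latent color code to an RGB value per point. All names, dimensions, and the MLP design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ColorGenerator(nn.Module):
    """Hypothetical sketch: map a latent color code z, together with
    per-point shape features, to an RGB color per point."""
    def __init__(self, feat_dim=64, z_dim=8, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, point_feats, z):
        # point_feats: (N, feat_dim); z: (z_dim,) shared across all points
        z_rep = z.unsqueeze(0).expand(point_feats.size(0), -1)
        return self.mlp(torch.cat([point_feats, z_rep], dim=1))
```

Sampling different codes z for a fixed shape would then yield diverse colorizations of that shape; training with one ground-truth colorization per shape would rely on the cross-shape propagation the abstract describes.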


MetalGAN: a Cluster-based Adaptive Training for Few-Shot Adversarial Colorization

arXiv.org Machine Learning

In recent years, the majority of work on deep-learning-based image colorization has focused on how to make good use of the enormous datasets currently available. But what happens when the available data are scarce? The main objective of this work is to show that a network can be trained to produce excellent colorization results even without a large quantity of data. The adopted approach is a hybrid one, which uses an adversarial method for the actual colorization and a meta-learning technique to enhance the generator model. In addition, an a-priori clustering of the training dataset provides a task-oriented division useful for meta-learning, while at the same time reducing the number of images processed per step. This paper describes the method and its main motivations in detail, and provides a discussion of results and future developments.
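
The abstract does not spell out the meta-learning scheme, but one plausible reading is a Reptile-style outer loop in which each cluster is one task. The sketch below is a guess at that structure, not the paper's algorithm; `adversarial_step` is an assumed callback for one GAN update.

```python
import copy
import torch

def reptile_meta_step(generator, cluster_loaders, adversarial_step,
                      inner_steps=5, inner_lr=1e-4, meta_lr=0.1):
    """Reptile-style outer update over per-cluster tasks (illustrative
    guess). `adversarial_step(generator, batch, opt)` is an assumed
    helper that runs one adversarial colorization update."""
    meta_weights = copy.deepcopy(generator.state_dict())
    for loader in cluster_loaders:  # one meta-learning task per cluster
        generator.load_state_dict(meta_weights)
        opt = torch.optim.Adam(generator.parameters(), lr=inner_lr)
        for _, batch in zip(range(inner_steps), loader):
            adversarial_step(generator, batch, opt)
        with torch.no_grad():  # nudge meta weights toward the adapted ones
            for k, w in generator.state_dict().items():
                meta_weights[k] += meta_lr * (w - meta_weights[k])
    generator.load_state_dict(meta_weights)
```

Capping the inner loop at a few batches per cluster also matches the stated goal of reducing the number of images processed per step.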


Unsupervised Diverse Colorization via Generative Adversarial Networks

arXiv.org Artificial Intelligence

Colorization of grayscale images has been a hot topic in computer vision. Previous research mainly focuses on producing a colored image that matches the original one. However, since many colors share the same gray value, an input grayscale image can be diversely colorized while remaining realistic. In this paper, we design a novel solution for unsupervised diverse colorization. Specifically, we leverage conditional generative adversarial networks to model the distribution of real-world object colors, developing a fully convolutional generator with multi-layer noise to enhance diversity, multi-layer condition concatenation to maintain realism, and stride-1 convolutions to preserve spatial information. With this network architecture, the model yields highly competitive performance on the open LSUN bedroom dataset. A Turing-style test with 80 human participants further indicates that our generated color schemes are highly convincing.
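
The three architectural ingredients the abstract names (multi-layer noise, multi-layer condition concatenation, stride-1 convolutions) can be sketched as follows. Channel counts, depth, and the two-channel chrominance output are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DiverseColorizer(nn.Module):
    """Illustrative only: stride-1 convolutions keep the spatial size,
    while the grayscale condition and fresh noise are re-concatenated
    at every layer, as the abstract describes."""
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        in_ch = 3  # layer 0 sees gray (as features) + condition + noise
        for _ in range(layers):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, hidden, 3, stride=1, padding=1),
                nn.BatchNorm2d(hidden),
                nn.ReLU()))
            in_ch = hidden + 2  # features + condition + noise again
        self.head = nn.Conv2d(hidden, 2, 3, padding=1)  # chrominance (assumed)

    def forward(self, gray):
        h = gray  # (B, 1, H, W)
        for block in self.blocks:
            noise = torch.randn_like(gray)             # multi-layer noise
            h = block(torch.cat([h, gray, noise], 1))  # condition concat
        return self.head(h)
```

Injecting fresh noise at every layer, rather than only at the input, is what lets repeated forward passes on the same grayscale image produce different yet plausible colorizations.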


A different kind of (deep) learning: part 1

#artificialintelligence

Deep learning has truly reshuffled things in the machine learning field, and specifically in image recognition tasks. In 2012, AlexNet initiated a race, still far from over, toward solving, or at least significantly improving, computer vision tasks. Each of these research paths improves training quality (speed, accuracy, sometimes generalization), but it seems that doing more of the same thing may yield gradual improvements rather than a significant breakthrough. On the other hand, a growing body of work in deep learning shows that there are significant flaws in current methods, especially in terms of generalization, e.g. a recent study demonstrating generalization failure when objects are rotated. So there seems to be a need for improvements that are a bit more aggressive, or perhaps for expanding the research spectrum to ideas that may be a bit riskier.