
style transfer


Neural Style Transfer Tutorial

#artificialintelligence

Neural Style Transfer is a technique that applies the style of one image to the content of another. It is a generative algorithm, meaning that it produces an image as its output. So how does it work? In this post, we explain how the vanilla Neural Style Transfer algorithm adds different styles to an image and what makes the algorithm unique and interesting. Like traditional GANs, Style Transfer generates images as its output.
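
For readers who prefer code to prose, below is a minimal sketch of the vanilla algorithm described above, assuming PyTorch and torchvision: the generated image itself is optimized so that its deep VGG-19 features match the content image while its Gram-matrix statistics match the style image. The layer choices and style weight are common defaults, not values taken from the tutorial.

# A minimal sketch of vanilla Neural Style Transfer (in the spirit of Gatys et al.),
# not the exact code behind the tutorial above. Assumes PyTorch + torchvision.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder images; in practice, load and normalize real content/style photos.
content = torch.rand(1, 3, 224, 224, device=device)
style = torch.rand(1, 3, 224, 224, device=device)

# Frozen VGG-19 feature extractor.
features = vgg19(weights="IMAGENET1K_V1").features.to(device).eval()
for p in features.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # early conv layers capture style statistics
CONTENT_LAYER = 21                  # a deeper layer captures content structure

def extract(x):
    """Return {layer_index: activation} for the layers we care about."""
    acts = {}
    for i, layer in enumerate(features):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            acts[i] = x
    return acts

def gram(feat):
    """Gram matrix: channel-to-channel correlations that summarize 'style'."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

with torch.no_grad():
    content_acts = extract(content)
    style_grams = {i: gram(a) for i, a in extract(style).items() if i in STYLE_LAYERS}

# The "generated" image is itself the optimization variable.
generated = content.clone().requires_grad_(True)
opt = torch.optim.Adam([generated], lr=0.02)

for step in range(300):
    opt.zero_grad()
    acts = extract(generated)
    content_loss = F.mse_loss(acts[CONTENT_LAYER], content_acts[CONTENT_LAYER])
    style_loss = sum(F.mse_loss(gram(acts[i]), style_grams[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e4 * style_loss   # style weight is a tunable assumption
    loss.backward()
    opt.step()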


AI for painting: Unraveling Neural Style Transfer

#artificialintelligence

In a world where NFTs are being sold for millions, the next profitable business might be to create unique virtual entities, and who is better suited for the job than artificial intelligence? In fact, well before the NFT hype, in October 2018, the first AI-generated portrait was sold for $432,500. Since then, people have used their deep knowledge of advanced algorithms to make astounding pieces of art. Refik Anadol, for instance, is an artist who uses AI to create captivating paintings. Another digital artist, Petros Vrellis, put up an interactive animation of Van Gogh's celebrated artwork "Starry Night" in 2012, which reached over 1.5 million views within three months.


The state of creative AI: will video producers/editors get superpowers?

#artificialintelligence

Disruptive innovations begin at the bottom of a market with simple applications, then move up until they displace established ways of working. Today, we are witnessing the entry of Artificial Intelligence (AI) into basic video production. As technology becomes more powerful, the impact of generative AI will increase. In this article I will show examples that are representative of the current state of AI and have the potential to impact the jobs of video producers and editors. Color grading is an art form in itself.


Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning

arXiv.org Artificial Intelligence

Text style transfer is a challenging text generation problem that aims to alter the style of a given sentence to a target style while keeping its content unchanged. Since parallel datasets are naturally scarce, recent works mainly focus on solving the problem in an unsupervised manner. However, previous gradient-based works generally suffer from two deficiencies: (1) Content migration. Previous approaches lack explicit modeling of content invariance and are thus susceptible to content shift between the original sentence and the transferred one. (2) Style misclassification. A natural drawback of gradient-guided approaches is that the inference process resembles an adversarial attack, so the latent optimization easily becomes an attack on the classifier that succeeds through misclassification, making high transfer accuracy difficult to achieve. To address these problems, we propose a novel gradient-guided model for text style transfer that uses a contrastive paradigm to explicitly gather semantically similar sentences, and a siamese-structure style classifier, to alleviate the two issues respectively. Experiments on two datasets show the effectiveness of the proposed approach compared to the state of the art.
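
To make the gradient-guided idea concrete, here is a heavily simplified, hypothetical sketch: a sentence latent is pushed by the gradient of a style classifier toward the target style, while a cosine anchor to the original latent stands in for the paper's contrastive content modeling. The encoder, classifier, and all dimensions below are illustrative placeholders, not the authors' architecture.

# A highly simplified sketch of gradient-guided latent editing for text style
# transfer, NOT the authors' model: the encoder/style classifier below are
# hypothetical stand-ins, and the contrastive term is reduced to a cosine anchor.
import torch
import torch.nn.functional as F

latent_dim, num_styles = 256, 2

# Hypothetical pretrained components (here randomly initialized placeholders).
encoder = torch.nn.GRU(input_size=300, hidden_size=latent_dim, batch_first=True)
style_clf = torch.nn.Linear(latent_dim, num_styles)

def transfer(sentence_embeds, target_style, steps=30, lr=0.1, content_weight=1.0):
    """Push a sentence latent toward `target_style` while anchoring it to the
    original latent so content drifts as little as possible."""
    _, h = encoder(sentence_embeds)          # (1, 1, latent_dim)
    z0 = h.squeeze(0).detach()               # original latent, kept fixed
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_style])

    for _ in range(steps):
        opt.zero_grad()
        style_loss = F.cross_entropy(style_clf(z), target)
        # Content anchor: stay close to the source latent.
        content_loss = 1.0 - F.cosine_similarity(z, z0).mean()
        (style_loss + content_weight * content_loss).backward()
        opt.step()
    return z.detach()   # would be fed to a decoder to generate the sentence

# Usage with a dummy "sentence" of 10 word embeddings.
edited_latent = transfer(torch.randn(1, 10, 300), target_style=1)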


Can Machines Generate Personalized Music? A Hybrid Favorite-aware Method for User Preference Music Transfer

arXiv.org Artificial Intelligence

Abstract--User preference music transfer (UPMT) is a new problem in music style transfer that can be applied to many scenarios but remains understudied. Transferring an arbitrary song to fit a user's preferences increases musical diversity. Most music style transfer approaches rely on data-driven methods; in general, however, constructing a large training dataset is challenging because users can rarely provide enough of their favorite songs. To address this problem, this paper proposes a hybrid favorite-aware method for UPMT. [Figure 1: A demonstration of UPMT: transferring symbolic input music to new symbolic music that fits a user's preferences based on features of their favorite music.] There has been recent growth in research around music style transfer, a technique that transfers the style of one piece of music to another based on different levels of music representations [1]. Music style transfer is considered important because it increases music variety by reproducing existing music in a creative way. For example, Marino et al. [17] used prior semantic knowledge in the form of knowledge graphs to improve image classification performance, and Donadello et al. [18] extracted semantic representations from a knowledge base to enhance the quality of recommender systems. Despite these advances, such approaches cannot be directly applied to music.


Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases

arXiv.org Artificial Intelligence

Controlling the manner in which a character moves in a real-time animation system is a challenging task with useful applications. Existing style transfer systems require access to a reference content motion clip; in real-time systems, however, the future motion content is unknown and liable to change with user input. In this work we present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases. An additional style modulation network uses feature-wise transformations to modulate style in real time. To evaluate our method, we create and release a new style modelling dataset, 100STYLE, containing over 4 million frames of stylised locomotion data in 100 different styles that present a number of challenges for existing systems. To model these styles, we extend the local phase calculation with a contact-free formulation. In comparison to other methods for real-time style modelling, we show our system is more robust and efficient in its style representation while improving motion quality.
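
The feature-wise transformation at the heart of such a system can be sketched as a FiLM-style scale-and-shift: a style code is mapped to per-channel gamma and beta that modulate the content network's hidden features, so no future motion content is needed. The toy network and dimensions below are assumptions for illustration, not the 100STYLE architecture.

# A minimal sketch of feature-wise style modulation (FiLM-style scale-and-shift),
# the general mechanism the paper builds on; the layer sizes and the tiny motion
# network here are illustrative assumptions.
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """Maps a style code to per-channel (gamma, beta) and applies them to features."""
    def __init__(self, style_dim, feat_dim):
        super().__init__()
        self.to_gamma = nn.Linear(style_dim, feat_dim)
        self.to_beta = nn.Linear(style_dim, feat_dim)

    def forward(self, features, style_code):
        gamma = self.to_gamma(style_code)
        beta = self.to_beta(style_code)
        return gamma * features + beta       # feature-wise transformation

class StylisedMotionNet(nn.Module):
    """Toy content network (pose + local-phase input) with one modulated layer."""
    def __init__(self, pose_dim=63, phase_dim=8, style_dim=16, hidden=256):
        super().__init__()
        self.content = nn.Sequential(nn.Linear(pose_dim + phase_dim, hidden), nn.ELU())
        self.mod = StyleModulation(style_dim, hidden)
        self.out = nn.Linear(hidden, pose_dim)   # predicts the next pose

    def forward(self, pose, phases, style_code):
        h = self.content(torch.cat([pose, phases], dim=-1))
        h = self.mod(h, style_code)              # style injected without future content
        return self.out(h)

net = StylisedMotionNet()
next_pose = net(torch.randn(1, 63), torch.randn(1, 8), torch.randn(1, 16))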


Texture Reformer: Towards Fast and Universal Interactive Texture Transfer

arXiv.org Artificial Intelligence

In this paper, we present the texture reformer, a fast and universal neural-based framework for interactive texture transfer with user-specified guidance. The challenges lie in three aspects: 1) the diversity of tasks, 2) the simplicity of guidance maps, and 3) the execution efficiency. To address these challenges, our key idea is to use a novel feed-forward multi-view and multi-stage synthesis procedure consisting of I) a global view structure alignment stage, II) a local view texture refinement stage, and III) a holistic effect enhancement stage to synthesize high-quality results with coherent structures and fine texture details in a coarse-to-fine fashion. In addition, we introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy to achieve more accurate semantic-guided and structure-preserved texture transfer. Experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. Compared with state-of-the-art interactive texture transfer algorithms, it not only achieves higher-quality results but, more remarkably, is also 2-5 orders of magnitude faster. Code is available at https://github.com/EndyWon/Texture-Reformer.
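
As a flavour of what semantic-map-guided texture transfer involves (though not the paper's VSTR operation or its three-stage pipeline), the sketch below matches per-region feature statistics between content and style features using integer-labelled guidance masks, in the spirit of AdaIN.

# A generic illustration of semantic-map-guided feature statistics transfer
# (AdaIN-style, per region); this is NOT the paper's VSTR operation.
import torch

def region_stat_transfer(content_feat, style_feat, content_mask, style_mask, eps=1e-5):
    """For each semantic label, match the mean/std of content features inside that
    region to the statistics of the same region in the style features.
    Shapes: feats (C, H, W); masks (H, W) with integer region labels."""
    out = content_feat.clone()
    for label in content_mask.unique().tolist():
        c_idx = (content_mask == label)
        s_idx = (style_mask == label)
        if s_idx.sum() == 0:
            continue                        # no matching region in the style image
        c = content_feat[:, c_idx]          # (C, Nc) features in this content region
        s = style_feat[:, s_idx]            # (C, Ns) features in this style region
        c_mean, c_std = c.mean(1, keepdim=True), c.std(1, keepdim=True) + eps
        s_mean, s_std = s.mean(1, keepdim=True), s.std(1, keepdim=True) + eps
        out[:, c_idx] = (c - c_mean) / c_std * s_std + s_mean
    return out

# Dummy usage: 64-channel features with a 2-region guidance map.
feat_c, feat_s = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
mask = (torch.arange(32).view(1, 32).expand(32, 32) > 15).long()
stylised = region_stat_transfer(feat_c, feat_s, mask, mask)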


Defending against Model Stealing via Verifying Embedded External Features

arXiv.org Artificial Intelligence

Obtaining a well-trained model involves expensive data collection and training procedures; the model is therefore a valuable piece of intellectual property. Recent studies revealed that adversaries can 'steal' deployed models even when they have no training samples and cannot access the model's parameters or structure. Currently, there are some defense methods that alleviate this threat, mostly by increasing the cost of model stealing. In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features. Specifically, we embed the external features by modifying a few training samples with style transfer. We then train a meta-classifier to determine whether a model is stolen from the victim. This approach is inspired by the understanding that stolen models should contain the knowledge of features learned by the victim model. We examine our method on both the CIFAR-10 and ImageNet datasets. Experimental results demonstrate that our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process. The code for reproducing the main results is available on GitHub (https://github.com/zlh-thu/StealingVerification).
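
At a very high level, the verification pipeline can be sketched as follows: a style transform plants external features in a few probe samples, and a meta-classifier learns to tell victim-derived models from independently trained ones based on their behaviour on those probes. Using averaged output logits as the behavioural signature, and the toy models below, are assumptions of this sketch rather than the paper's exact construction.

# A minimal, heavily simplified sketch of the verification idea described above.
import copy
import torch
import torch.nn as nn

def stylise(batch):
    # Placeholder for a real style-transfer transform that embeds external features.
    return torch.clamp(batch + 0.3 * torch.sin(batch * 7.0), 0.0, 1.0)

def make_model():
    # Hypothetical 10-class classifier standing in for victim / independent models.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

@torch.no_grad()
def signature(model, probes):
    # Behavioural signature: the model's averaged logits on the stylised probes.
    return model(stylise(probes)).mean(0)

victim = make_model()          # in practice, trained on data containing stylised samples
probes = torch.rand(16, 3, 32, 32)

meta_clf = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(meta_clf.parameters(), lr=1e-3)

# Train the meta-classifier: label 1 = derived from the victim, 0 = independent.
for step in range(200):
    if step % 2:
        model, label = copy.deepcopy(victim), 1   # stand-in for a stolen copy
    else:
        model, label = make_model(), 0            # independently trained model
    logits = meta_clf(signature(model, probes).unsqueeze(0))
    loss = nn.functional.cross_entropy(logits, torch.tensor([label]))
    opt.zero_grad(); loss.backward(); opt.step()

# Verification: flag a suspect model if the meta-classifier predicts "stolen".
is_stolen = meta_clf(signature(make_model(), probes).unsqueeze(0)).argmax(1).item()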


Neural Photometry-guided Visual Attribute Transfer

arXiv.org Artificial Intelligence

We present a deep learning-based method for propagating spatially-varying visual material attributes (e.g. texture maps or image stylizations) to larger samples of the same or similar materials. For training, we leverage images of the material taken under multiple illuminations and a dedicated data augmentation policy, making the transfer robust to novel illumination conditions and affine deformations. Our model relies on a supervised image-to-image translation framework and is agnostic to the transferred domain; we showcase a semantic segmentation, a normal map, and a stylization. Following an image analogies approach, the method only requires the training data to contain the same visual structures as the input guidance. Our approach works at interactive rates, making it suitable for material editing applications. We thoroughly evaluate our learning methodology in a controlled setup, providing quantitative measures of performance. Finally, we demonstrate that training the model on a single material is enough to generalize to materials of the same type without the need for massive datasets.
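
The paired augmentation idea described above can be illustrated roughly as follows: pick one of several illuminations of the same material patch and apply the same random affine transform to both the photo and its attribute map so the supervision stays spatially aligned. The parameter ranges below are illustrative assumptions, not the paper's exact policy.

# A small sketch of paired photometric/geometric augmentation. Assumes torchvision.
import random
import torch
import torchvision.transforms.functional as TF

def augment_pair(illumination_stack, attribute_map):
    """illumination_stack: (L, 3, H, W) photos of one patch under L illuminations.
    attribute_map: (C, H, W) target attribute (e.g. normals or a stylisation)."""
    photo = illumination_stack[random.randrange(illumination_stack.shape[0])]
    angle = random.uniform(-30, 30)
    translate = [random.randint(-8, 8), random.randint(-8, 8)]
    scale = random.uniform(0.9, 1.1)
    shear = [random.uniform(-5, 5)]
    # Identical geometric transform on input and target keeps the pair registered.
    photo = TF.affine(photo, angle=angle, translate=translate, scale=scale, shear=shear)
    target = TF.affine(attribute_map, angle=angle, translate=translate, scale=scale, shear=shear)
    return photo, target

photo, target = augment_pair(torch.rand(6, 3, 128, 128), torch.rand(3, 128, 128))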


ManiFest: Manifold Deformation for Few-shot Image Translation

arXiv.org Artificial Intelligence

Most image-to-image translation methods require a large number of training images, which restricts their applicability. We instead propose ManiFest, a framework for few-shot image translation that learns a context-aware representation of a target domain from only a few images. To enforce feature consistency, our framework learns a style manifold between source and proxy anchor domains (assumed to be composed of large numbers of images). The learned manifold is interpolated and deformed towards the few-shot target domain via patch-based adversarial and feature statistics alignment losses. All of these components are trained simultaneously in a single end-to-end loop. In addition to the general few-shot translation task, our approach can alternatively be conditioned on a single exemplar image to reproduce its specific style. Extensive experiments demonstrate the efficacy of ManiFest on multiple tasks, outperforming the state of the art on all metrics in both the general and exemplar-based scenarios. Our code is available at https://github.com/cv-rits/Manifest.
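
A feature-statistics alignment loss of the kind mentioned above can be sketched as matching channel-wise means and standard deviations between generated features and features of the few target-domain images; the plain L1 formulation below is an assumption, not ManiFest's exact loss.

# A minimal sketch of a feature-statistics alignment loss.
import torch

def feature_stats_alignment(gen_feats, target_feats, eps=1e-5):
    """gen_feats, target_feats: (B, C, H, W) activations from a shared encoder."""
    g_mean = gen_feats.mean(dim=(2, 3))
    g_std = gen_feats.std(dim=(2, 3)) + eps
    t_mean = target_feats.mean(dim=(2, 3))
    t_std = target_feats.std(dim=(2, 3)) + eps
    # Compare batch-level statistics so a handful of target images is enough.
    return ((g_mean.mean(0) - t_mean.mean(0)).abs().mean()
            + (g_std.mean(0) - t_std.mean(0)).abs().mean())

loss = feature_stats_alignment(torch.randn(4, 256, 16, 16), torch.randn(2, 256, 16, 16))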