Sensory Optimization: Neural Networks as a Model for Understanding and Creating Art

arXiv.org Artificial Intelligence

This article is about the cognitive science of visual art. Artists create physical artifacts (such as sculptures or paintings) which depict people, objects, and events. These depictions are usually stylized rather than photo-realistic. How is it that humans are able to understand and create stylized representations? Does this ability depend on general cognitive capacities or an evolutionary adaptation for art? What role is played by learning and culture? Machine Learning can shed light on these questions. It's possible to train convolutional neural networks (CNNs) to recognize objects without training them on any visual art. If such CNNs can generalize to visual art (by creating and understanding stylized representations), then CNNs provide a model for how humans could understand art without innate adaptations or cultural learning. I argue that Deep Dream and Style Transfer show that CNNs can create a basic form of visual art, and that humans could create art by similar processes. This suggests that artists make art by optimizing for effects on the human object-recognition system. Physical artifacts are optimized to evoke real-world objects for this system (e.g. to evoke people or landscapes) and to serve as superstimuli for this system.
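
The "optimization" at issue can be made concrete with a DeepDream-style procedure: starting from a pretrained object-recognition CNN, the pixels of an image are adjusted by gradient ascent so that they more strongly activate features the network already detects. Below is a minimal sketch assuming PyTorch and a torchvision VGG-16; the model, layer index, and hyperparameters are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of DeepDream-style sensory optimization: adjust the pixels of
# an image so that it more strongly activates a chosen layer of a pretrained
# object-recognition CNN. The model, layer index, and step sizes are
# illustrative; input normalization is omitted for brevity.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()

def deep_dream(image, layer_index=20, steps=50, lr=0.05):
    """Gradient-ascend the image to maximize the mean activation at one layer."""
    img = image.clone().to(device).requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, module in enumerate(cnn):
            x = module(x)
            if i == layer_index:
                break
        loss = x.mean()                     # stronger activations -> larger loss
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0.0, 1.0)
    return img.detach()

# Usage: start from a photo (a 1 x 3 x H x W tensor in [0, 1]) or from noise.
# dreamed = deep_dream(torch.rand(1, 3, 224, 224))
```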


Unsupervised Learning of Artistic Styles with Archetypal Style Analysis

Neural Information Processing Systems

In this paper, we introduce an unsupervised learning approach to automatically discover, summarize, and manipulate artistic styles from large collections of paintings. Our method is based on archetypal analysis, which is an unsupervised learning technique akin to sparse coding with a geometric interpretation. When applied to deep image representations from a data collection, it learns a dictionary of archetypal styles, which can be easily visualized. After training the model, the style of a new image, which is characterized by local statistics of deep visual features, is approximated by a sparse convex combination of archetypes. This allows us to interpret which archetypal styles are present in the input image, and in which proportion. Finally, our approach allows us to manipulate the coefficients of the latent archetypal decomposition, and achieve various special effects such as style enhancement, transfer, and interpolation between multiple archetypes.
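
To make the decomposition step concrete, the sketch below expresses a new image's style vector as a sparse convex combination of already-learned archetypes by solving a simplex-constrained least-squares problem with projected gradient descent. This is a generic illustration of the idea, not the authors' implementation; the solver, variable names, and shapes are assumptions.

```python
# Sketch: given a dictionary Z of archetypal style vectors (assumed already
# learned from a painting collection), approximate a new style vector s as a
# convex combination of archetypes. Generic projected-gradient illustration.
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (non-negative, sums to 1)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def archetypal_coefficients(s, Z, steps=500):
    """Solve min_a ||a @ Z - s||^2 subject to a lying on the probability simplex."""
    k = Z.shape[0]
    a = np.full(k, 1.0 / k)                         # start from the uniform mixture
    lr = 0.5 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)  # 1/L step from the Lipschitz bound
    for _ in range(steps):
        grad = 2.0 * (a @ Z - s) @ Z.T              # gradient of the squared error
        a = project_to_simplex(a - lr * grad)       # keep coefficients convex
    return a

# Usage (shapes are assumptions): Z is (num_archetypes x feature_dim), s is the
# style vector of a new image, e.g. flattened Gram-matrix statistics of its
# deep features; alphas[i] is then the proportion of archetype i in the image.
# alphas = archetypal_coefficients(s, Z)
```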


Kristen Stewart co-wrote a paper about AI in filmmaking

Daily Mail - Science & tech

Actor, director, model, teen vampire movie star – now Kristen Stewart can add research author to her list of credentials. Stewart, best known for her portrayal of Bella Swan in the Twilight Saga films, has co-written an article published in Cornell University's online library arXiv. It explains the artificial intelligence technique used in her new short film, Come Swim, that enables footage to take on the visual appearance of a painting. Many popular software programs, such as Adobe Photoshop and GIMP, already provide filters that can make photographs take on the general style of oil paintings, pen sketches, screen-prints or chalk drawings, for example. The algorithms needed to accomplish this first began to produce aesthetically pleasing results in the early 1990s.


An AI just discovered and then painted a hidden Picasso painting – Fanatical Futurist

#artificialintelligence

Neural style transfer was developed in 2015 by Leon Gatys and colleagues at the University of Tübingen in Germany. It stems from a fascinating insight into the way neural networks learn to recognize images of different kinds. Neural networks consist of layers that analyze an image at different scales. The first layer might recognize simple features such as edges; the next layer sees how these edges form basic shapes like circles; the next layer recognizes patterns of shapes, such as two circles close together; and yet another layer might label these pairs of circles as eyes. This kind of network would be able to recognize eyes in paintings in a wide variety of styles, from Leonardo da Vinci to Van Gogh to Picasso.
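
The hierarchy described here can be inspected directly in a pretrained network. The sketch below reads out activations at a few depths of a torchvision VGG-19 and computes a Gram matrix per layer, the channel-correlation statistic that Gatys-style transfer uses to summarize style. The chosen layer indices and their labels are illustrative assumptions, not the layers used in the original paper.

```python
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

# Layer indices into vgg.features, with rough labels for what each depth tends
# to respond to. These particular indices are assumptions for illustration.
LAYERS = {1: "edges", 6: "simple shapes", 11: "patterns", 20: "object parts"}

def features_and_grams(image):
    """Return per-layer activations and Gram matrices (channel correlations) for one image."""
    feats, grams = {}, {}
    x = image
    with torch.no_grad():
        for i, module in enumerate(vgg):
            x = module(x)
            if i in LAYERS:
                _, c, h, w = x.shape
                flat = x.reshape(c, h * w)                      # channels x positions
                feats[LAYERS[i]] = x
                grams[LAYERS[i]] = flat @ flat.T / (c * h * w)  # Gatys-style style statistic
            if i >= max(LAYERS):
                break
    return feats, grams

# Usage: the input is a 1 x 3 x H x W tensor (ImageNet-normalized in practice).
# feats, grams = features_and_grams(torch.rand(1, 3, 224, 224))
```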