baroque
Capturing the Flow of Art History
Do we really understand how a machine classifies art styles? Historically, art has been perceived and interpreted by human eyes, and there have always been controversial discussions over how people identify and understand art. Historians and the general public tend to interpret the subject matter of art through the context of history and social factors. Style, however, is different from subject matter: style "does not correspond to the existence of certain objects in the painting and is mainly related to the form and can be correlated with features at different levels" (Elgammal et al. 2018). This makes identifying and classifying the characteristics of an artwork's style, and its "transition" (how it flows and evolves), a challenge for both humans and machines. In this project, a series of state-of-the-art neural networks and manifold learning algorithms are explored to unveil this intriguing topic: How does a machine capture and interpret the flow of art history?
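The manifold-learning step mentioned above can be illustrated with a minimal sketch: take a matrix of high-dimensional style features (here random numbers stand in for features a trained network would produce) and project it to two dimensions. PCA via SVD is used below as a simple linear stand-in for the project's manifold learning; the data and dimensions are hypothetical.

```python
import numpy as np

# Hypothetical sketch: the feature matrix is random; in the project it
# would come from a network trained to predict style labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))   # 200 paintings x 512-dim features

def pca_project(X, n_components=2):
    """Project rows of X onto their top principal components."""
    Xc = X - X.mean(axis=0)              # center each feature dimension
    # SVD of the centered data; rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

embedding = pca_project(features)
print(embedding.shape)                   # (200, 2)
```

Each painting then becomes a point in the plane, and a "flow" of styles can be read off as a trajectory through that embedding.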
a collaboration of art and science
Would Machine Learning be Baroque because it takes a different perspective with a technological medium? Would Machine Learning be more than just algorithms and lines of code, but also drama and emotion, as was characteristic of artists during the Baroque period? But most importantly, how do we define what is valuable about art and beauty in the 21st century, when AI can create things like never before?
The Shape of Art History in the Eyes of the Machine
Elgammal, Ahmed (Rutgers University) | Liu, Bingchen (Rutgers University) | Kim, Diana (Rutgers University) | Elhoseiny, Mohamed (Facebook AI Research) | Mazzone, Marian (College of Charleston)
How does the machine classify styles in art? And how does it relate to art historians' methods for analyzing style? Several studies showed the ability of the machine to learn and predict styles, such as Renaissance, Baroque, Impressionism, etc., from images of paintings. This implies that the machine can learn an internal representation encoding discriminative features through its visual analysis. However, such a representation is not necessarily interpretable. We conducted a comprehensive study of several of the state-of-the-art convolutional neural networks applied to the task of style classification on 67K images of paintings, and analyzed the learned representation through correlation analysis with concepts derived from art history. Surprisingly, the networks could place the works of art in a smooth temporal arrangement mainly based on learning style labels, without any a priori knowledge of time of creation, the historical time and context of styles, or relations between styles. The learned representations showed that there are a few underlying factors that explain the visual variations of style in art. Some of these factors were found to correlate with style patterns suggested by Heinrich Wölfflin (1864-1945). The learned representations also consistently highlighted certain artists as the extreme distinctive representatives of their styles, which quantitatively confirms art historians' observations.
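The correlation analysis the abstract refers to can be sketched in a few lines. The finding is that a leading factor of the learned representation lines up with time of creation; the sketch below checks that with a Pearson correlation on synthetic data (the years and the "first factor" here are fabricated for illustration, not the study's data).

```python
import numpy as np

# Synthetic stand-ins: creation years and a hypothetical first factor
# of the learned representation that trends with time plus noise.
rng = np.random.default_rng(1)
years = np.linspace(1400, 1950, 100)
factor = (years - 1400) / 550 + rng.normal(scale=0.05, size=100)

# Pearson correlation between the factor and the creation years
r = np.corrcoef(years, factor)[0, 1]
print(f"correlation with time: {r:.2f}")
```

A strong positive correlation like this is what "a smooth temporal arrangement" means operationally: sorting paintings by the learned factor roughly sorts them by date, even though no dates were given during training.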
The AI artist that can create its own painting style
Scientists have developed an AI artist whose masterpieces could pass as human-made. The system builds upon earlier techniques to generate art and learn about style through observation, but unlike earlier approaches, the new network also has the ability to become creative. When put to the test, the researchers found that humans could not tell the difference between images created by the system and artwork made by contemporary human artists – and sometimes, the AI-generated images even scored higher. Like the GAN system, the Creative Adversarial Network (CAN) also uses two sub-networks.
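The "creative" twist in CAN is that, besides trying to fool the discriminator's real/fake judgment, the generator is pushed to produce work the discriminator cannot assign to any known style. One way to write that style-ambiguity term (an assumed loss form for illustration, not the authors' code) is cross-entropy between the discriminator's style distribution and a uniform target, which is smallest when the styles are maximally ambiguous:

```python
import numpy as np

def style_ambiguity_loss(style_probs):
    """Cross-entropy between a predicted style distribution and uniform.

    Minimized when style_probs is uniform, i.e. when the discriminator
    cannot commit the image to any one known style.
    """
    k = len(style_probs)
    uniform = np.full(k, 1.0 / k)
    return -np.sum(uniform * np.log(style_probs + 1e-12))

confident = np.array([0.97, 0.01, 0.01, 0.01])   # D is sure of the style
ambiguous = np.array([0.25, 0.25, 0.25, 0.25])   # maximally style-ambiguous

print(style_ambiguity_loss(confident))   # large penalty
print(style_ambiguity_loss(ambiguous))   # small penalty
```

Adding this term to the usual adversarial objective rewards images that look like art (real/fake head) while sitting between known styles (style head), which is where the claimed "creativity" comes from.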