

If You Hated 'A House of Dynamite,' Watch This Classic Nuclear Thriller Instead

WIRED

At a time when nuclear threats feel more alarming than ever, Netflix's doomsday film falls frustratingly flat. A 1964 masterpiece tells a much better cautionary tale. Somewhere over the Arctic reaches of North America, a nuclear bomber flies with its squadron, awaiting its orders. When a secret code appears on a machine in the cockpit, the crew members look at one another, stunned. The code is instructing them to attack.


AI firm plans to reconstruct lost footage from Orson Welles' masterpiece The Magnificent Ambersons

The Guardian

An AI company is to reconstruct the missing portions of Orson Welles' legendary mutilated masterwork The Magnificent Ambersons, it has been announced. According to the Hollywood Reporter, the Showrunner platform is planning to use its AI tools to assist in a recreation of the lost 43 minutes of Welles' 1942 film, removed and subsequently destroyed by Hollywood studio RKO. Edward Saatchi, CEO of interactive AI film-making studio Fable, which operates Showrunner, said in a statement to IndieWire: "We're starting with Orson Welles because he is the greatest storyteller of the last 200 years … So many people are rightly skeptical of AI's impact on cinema – but we hope that this gives people a sense of a positive contribution that AI can make for storytelling." Reports suggest that Showrunner is partnering with film-maker Brian Rose, who has been working since 2019 on an attempt to reconstruct the missing portions using animated sequences, as well as VFX expert Tom Clive. Welles started production in 1941 on The Magnificent Ambersons, an adaptation of Booth Tarkington's celebrated novel about a midwestern family in decline, as a follow-up to his Oscar-winning debut Citizen Kane.


Forensic Study of Paintings Through the Comparison of Fabrics

Murillo-Fuentes, Juan José, Olmos, Pablo M., Alba-Carcelén, Laura

arXiv.org Artificial Intelligence

The study of canvas fabrics in works of art is a crucial tool for authentication, attribution and conservation. Traditional methods are based on thread density map matching, which cannot be applied when canvases do not come from contiguous positions on a roll. This paper presents a novel approach based on deep learning to assess the similarity of textiles. We introduce an automatic tool that evaluates the similarity between canvases without relying on thread density maps. A Siamese deep learning model is designed and trained to compare pairs of images by exploiting the feature representations learned from the scans. In addition, a similarity estimation method is proposed, aggregating predictions from multiple pairs of cloth samples to provide a robust similarity score. Our approach is applied to canvases from the Museo Nacional del Prado, corroborating the hypothesis that plain weave canvases, widely used in painting, can be effectively compared even when their thread densities are similar. The results demonstrate the feasibility and accuracy of the proposed method, opening new avenues for the analysis of masterpieces.
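
For intuition, here is a minimal sketch of the pair-wise comparison idea described above, assuming a PyTorch setup; the backbone, patch format, and aggregation scheme are illustrative placeholders, not the authors' exact architecture.

```python
# Sketch of a Siamese similarity model for canvas-scan patches; the
# encoder layout and aggregation are assumptions, not the paper's design.
import torch
import torch.nn as nn

class SiameseCanvasNet(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Shared convolutional encoder applied to both patches
        # (single channel, assuming grayscale canvas scans).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, embed_dim),
        )
        # Small head mapping the pair embedding to a similarity logit.
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, patch_a, patch_b):
        za, zb = self.encoder(patch_a), self.encoder(patch_b)
        # Absolute difference of embeddings is a common Siamese pairing.
        return self.head(torch.abs(za - zb)).squeeze(-1)

def canvas_similarity(model, pairs):
    """Aggregate per-pair predictions into one canvas-level score,
    mirroring the paper's idea of pooling over many sample pairs."""
    with torch.no_grad():
        logits = torch.stack([model(a, b) for a, b in pairs])
    return torch.sigmoid(logits).mean().item()
```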


Exploring Language Patterns of Prompts in Text-to-Image Generation and Their Impact on Visual Diversity

Palmini, Maria-Teresa De Rosa, Cetinic, Eva

arXiv.org Artificial Intelligence

Following the initial excitement, Text-to-Image (TTI) models are now being examined more critically. While much of the discourse has focused on biases and stereotypes embedded in large-scale training datasets, the sociotechnical dynamics of user interactions with these models remain underexplored. This study examines the linguistic and semantic choices users make when crafting prompts and how these choices influence the diversity of generated outputs. Analyzing over six million prompts from the Civiverse dataset on the CivitAI platform across seven months, we categorize users into three groups based on their levels of linguistic experimentation: consistent repeaters, occasional repeaters, and non-repeaters. Our findings reveal that as user participation grows over time, prompt language becomes increasingly homogenized through the adoption of popular community tags and descriptors, with repeated prompts comprising 40-50% of submissions. At the same time, semantic similarity and topic preferences remain relatively stable, emphasizing common subjects and surface aesthetics. Using Vendi scores to quantify visual diversity, we demonstrate a clear correlation between lexical similarity in prompts and the visual similarity of generated images, showing that linguistic repetition reinforces less diverse representations. These findings highlight the significant role of user-driven factors in shaping AI-generated imagery, beyond inherent model biases, and underscore the need for tools and practices that encourage greater linguistic and thematic experimentation within TTI systems to foster more inclusive and diverse AI-generated content.
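
As a concrete reference for the diversity metric mentioned above, here is a minimal sketch of the Vendi score computed over a set of feature vectors; the cosine kernel and the source of the features are assumptions, since the abstract does not fix them.

```python
# Vendi score: the "effective number" of distinct items among n samples,
# defined as exp(Shannon entropy of the eigenvalues of K/n).
import numpy as np

def vendi_score(features: np.ndarray) -> float:
    # Cosine-similarity kernel with unit diagonal.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    K = normed @ normed.T
    # Eigenvalues of K/n are nonnegative and sum to 1, so they behave
    # like a probability distribution over modes of variation.
    lam = np.linalg.eigvalsh(K / len(K))
    lam = lam[lam > 1e-12]
    return float(np.exp(-np.sum(lam * np.log(lam))))
```

A score near 1 means the images are nearly identical; a score near n means they are all distinct, which is why lexical repetition in prompts shows up as lower Vendi scores on the outputs.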


ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation

Gal, Rinon, Haviv, Adi, Alaluf, Yuval, Bermano, Amit H., Cohen-Or, Daniel, Chechik, Gal

arXiv.org Artificial Intelligence

The practical use of text-to-image generation has evolved from simple, monolithic models to complex workflows that combine multiple specialized components. While workflow-based approaches can lead to improved image quality, crafting effective workflows requires significant expertise, owing to the large number of available components, their complex inter-dependence, and their dependence on the generation prompt. Here, we introduce the novel task of prompt-adaptive workflow generation, where the goal is to automatically tailor a workflow to each user prompt. We propose two LLM-based approaches to tackle this task: a tuning-based method that learns from user-preference data, and a training-free method that uses the LLM to select existing flows. Both approaches lead to improved image quality when compared to monolithic models or generic, prompt-independent workflows. Our work shows that prompt-dependent flow prediction offers a new pathway to improving text-to-image generation quality, complementing existing research directions in the field.
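
To make the training-free variant concrete, here is a minimal sketch in which an LLM selects among a few pre-built workflows; the workflow table and the llm_complete callable are hypothetical stand-ins, not ComfyGen's actual interface.

```python
# Sketch of training-free, prompt-adaptive workflow selection: the LLM is
# asked to pick one existing flow per user prompt. Names are placeholders.
WORKFLOWS = {
    "photorealistic_portrait": "flow_portrait.json",
    "anime_character": "flow_anime.json",
    "landscape_hdr": "flow_landscape.json",
}

def select_workflow(prompt: str, llm_complete) -> str:
    """Ask an LLM which predefined workflow best fits this prompt."""
    options = ", ".join(WORKFLOWS)
    query = (
        f"User prompt: {prompt!r}\n"
        f"Pick the single best workflow from: {options}.\n"
        "Answer with the workflow name only."
    )
    choice = llm_complete(query).strip()
    # Fall back to a default flow if the LLM answers out of vocabulary.
    return WORKFLOWS.get(choice, "flow_portrait.json")
```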


How to Trace Latent Generative Model Generated Images without Artificial Watermark?

Wang, Zhenting, Sehwag, Vikash, Chen, Chen, Lyu, Lingjuan, Metaxas, Dimitris N., Ma, Shiqing

arXiv.org Artificial Intelligence

Latent generative models (e.g., Stable Diffusion) have become increasingly popular, but concerns have arisen regarding potential misuse of the images they generate. It is therefore necessary to analyze the origin of images by inferring whether a particular image was generated by a specific latent generative model. Most existing methods (e.g., image watermarking and model fingerprinting) require extra steps during training or generation. These requirements restrict their use to generated images that underwent such extra operations, and the operations themselves might compromise the quality of the generated images. In this work, we ask whether it is possible to effectively and efficiently trace the images generated by a specific latent generative model without these requirements. To study this problem, we design a latent-inversion-based method called LatentTracer that traces the generated images of an inspected model by checking whether the examined images can be well reconstructed from an inverted latent input. We leverage gradient-based latent inversion and identify an encoder-based initialization as critical to the success of our approach. Our experiments on state-of-the-art latent generative models, such as Stable Diffusion, show that our method can distinguish images generated by the inspected model from other images with high accuracy and efficiency. Our findings suggest the intriguing possibility that images generated by today's latent generative models are naturally watermarked by the decoder used in the source model. Code: https://github.com/ZhentingWang/LatentTracer.
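
A minimal sketch of the reconstruction check described above, assuming PyTorch; the step count, learning rate, and threshold tau are illustrative choices, not the paper's tuned values.

```python
# Latent inversion: optimize a latent so the inspected model's decoder
# reproduces the examined image, then threshold the reconstruction error.
import torch
import torch.nn.functional as F

def trace_image(image, decoder, encoder, steps=200, lr=0.05, tau=1e-3):
    # Encoder-based initialization, which the paper identifies as critical.
    z = encoder(image).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(decoder(z), image).backward()
        opt.step()
    # A low final reconstruction error suggests the examined image was
    # produced by this model's decoder.
    with torch.no_grad():
        err = F.mse_loss(decoder(z), image).item()
    return err < tau
```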


Generative Escher Meshes

Aigerman, Noam, Groueix, Thibault

arXiv.org Artificial Intelligence

This paper proposes a fully automatic, text-guided generative method for producing periodic, repeating, tileable 2D art, such as that seen on floors, mosaics, ceramics, and in the work of M.C. Escher. In contrast to the standard concept of a seamless texture, i.e., square images that are seamless when tiled, our method generates non-square tilings that consist solely of repeating copies of the same object. It achieves this by optimizing both the geometry and color of a 2D mesh, in order to generate a non-square tile in the shape and appearance of the desired object, with close to no additional background details. We enable geometric optimization of tilings through our key technical contribution: an unconstrained, differentiable parameterization of the space of all possible tileable shapes for a given symmetry group. Namely, we prove that modifying the Laplacian used in a 2D mesh-mapping technique - Orbifold Tutte Embedding - can achieve all possible tiling configurations for a chosen planar symmetry group. We thus treat both the mesh's tile shape and its texture as optimizable parameters, rendering the textured mesh via a differentiable renderer. We leverage a trained image diffusion model to define a loss on the resulting image, thereby updating the mesh's parameters so that its appearance matches the text prompt. We show that our method produces plausible, appealing results, with non-trivial tiles, for a variety of periodic tiling patterns.
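
To illustrate the overall optimization loop, here is a minimal sketch assuming PyTorch; tile_from_params, render, and sds_loss are placeholder callables standing in for the paper's Orbifold-Tutte-based parameterization, differentiable renderer, and diffusion-guided loss.

```python
# Sketch of the loop: unconstrained parameters -> valid tileable mesh ->
# rendered image -> diffusion loss against the text prompt -> update.
import torch

def optimize_tile(free_params, colors, tile_from_params, render, sds_loss,
                  prompt_embedding, steps=1000, lr=1e-2):
    # Both geometry parameters and per-vertex colors are leaf tensors
    # with requires_grad=True, optimized jointly.
    opt = torch.optim.Adam([free_params, colors], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Map unconstrained parameters to a valid tileable mesh geometry.
        verts, faces = tile_from_params(free_params)
        # Differentiably render the textured, tiled mesh to an image.
        image = render(verts, faces, colors)
        # Diffusion-model guidance pulls the image toward the text prompt.
        loss = sds_loss(image, prompt_embedding)
        loss.backward()
        opt.step()
    return tile_from_params(free_params), colors
```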


Thought-provoking and climactic space-related movies that will captivate you through boundless journeys

FOX News

The vastness of the universe has always captivated the human imagination, and filmmakers have often looked to the stars for inspiration. Space-related movies have become a genre of their own, offering audiences an opportunity to explore the unknown, experience the thrill of interstellar travel and ponder the profound questions of our existence. These are some of the most iconic and thought-provoking space-themed films that have left a lasting impact on both the science fiction genre and Hollywood. From "2001: A Space Odyssey" to "Interstellar" and space survival tales like "Gravity" and "The Martian," Fox News Digital dives into the cinematic cosmos, celebrating these films' enduring impact on our love for science fiction.


Transforming Pixels into a Masterpiece: AI-Powered Art Restoration using a Novel Distributed Denoising CNN (DDCNN)

B., Sankar, Saravanan, Mukil, Kumar, Kalaivanan, Dubbaka, Siri

arXiv.org Artificial Intelligence

Art restoration is crucial for preserving cultural heritage, but traditional methods have limitations in faithfully reproducing original artworks while addressing issues like fading, staining, and damage. We present an innovative approach using deep learning, specifically Convolutional Neural Networks (CNNs), and Computer Vision techniques to revolutionize art restoration. We start by creating a diverse dataset of deteriorated art images with various distortions and degradation levels. This dataset trains a Distributed Denoising CNN (DDCNN) to remove distortions while preserving intricate details. Our method is adaptable to different distortion types and levels, making it suitable for various deteriorated artworks, including paintings, sketches, and photographs. Extensive experiments demonstrate our approach's efficiency and effectiveness compared to other Denoising CNN models. We achieve a substantial reduction in distortion, transforming deteriorated artworks into masterpieces. Quantitative evaluations confirm our method's superiority over traditional techniques, reshaping the art restoration field and preserving cultural heritage. In summary, our paper introduces an AI-powered solution that combines Computer Vision and deep learning with DDCNN to restore artworks accurately, overcoming limitations and paving the way for future advancements in art restoration.
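
The abstract does not specify the DDCNN's internal layout, so the following is only a generic residual denoising sketch in PyTorch (a DnCNN-style assumption) to illustrate the idea of predicting and removing distortions while preserving detail.

```python
# Sketch of a residual denoising CNN: the network predicts the distortion
# and subtracts it, so fine details pass through largely untouched.
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    def __init__(self, channels=3, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU()]
        layers.append(nn.Conv2d(width, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, degraded):
        # Residual formulation: model the degradation, not the artwork.
        return degraded - self.body(degraded)
```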


'World's most advanced' humanoid robot attempts to draw a CAT

Daily Mail - Science & tech

Losing your job to a robot is something that many people are beginning to fear. But if you're an artist, you can rest easy for now, if the latest robot demonstration is anything to go by. In a new video, Ameca, described by her developers as the 'world's most advanced' humanoid robot, is tasked with drawing a 'cute-looking' cat. Her drawing is pretty basic, yet Ameca seems impressed with her work. Speaking to a researcher, she sassily said: 'If you don't like my art, you probably just don't understand art.'