POSTA: A Go-to Framework for Customized Artistic Poster Generation

Chen, Haoyu, Xu, Xiaojie, Li, Wenbo, Ren, Jingjing, Ye, Tian, Liu, Songhua, Chen, Ying-Cong, Zhu, Lei, Wang, Xinchao

arXiv.org Artificial Intelligence

Poster design is a critical medium for visual communication. Prior work has explored automatic poster design using deep learning techniques, but these approaches lack text accuracy, user customization, and aesthetic appeal, limiting their applicability in artistic domains such as movies and exhibitions, where both clear content delivery and visual impact are essential. To address these limitations, we present POSTA: a modular framework powered by diffusion models and multimodal large language models (MLLMs) for customized artistic poster generation. The framework consists of three modules. Background Diffusion creates a themed background based on user input. Design MLLM then generates layout and typography elements that align with and complement the background style. Finally, to enhance the poster's aesthetic appeal, ArtText Diffusion applies additional stylization to key text elements. The final result is a visually cohesive and appealing poster, with a fully modular process that allows for complete customization. To train our models, we develop the PosterArt dataset, comprising high-quality artistic posters annotated with layout, typography, and pixel-level stylized text segmentation. Our comprehensive experimental analysis demonstrates POSTA's exceptional controllability and design diversity, outperforming existing models in both text accuracy and aesthetic quality.
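The three-module pipeline described in the abstract can be sketched as three composable stages. This is a minimal illustrative sketch only: `background_diffusion`, `design_mllm`, and `arttext_diffusion` below are hypothetical string-based stand-ins for the paper's actual models.

```python
from dataclasses import dataclass, field

@dataclass
class Poster:
    background: str = ""
    layout: list = field(default_factory=list)
    styled_text: list = field(default_factory=list)

def background_diffusion(prompt):
    # Stand-in for Background Diffusion: themed background from user input.
    return f"background({prompt})"

def design_mllm(background, texts):
    # Stand-in for Design MLLM: layout/typography aligned to the background.
    return [(t, f"on-{background}") for t in texts]

def arttext_diffusion(layout):
    # Stand-in for ArtText Diffusion: stylize key text elements.
    return [f"stylized:{text}" for text, _ in layout]

def generate_poster(prompt, texts):
    poster = Poster()
    poster.background = background_diffusion(prompt)
    poster.layout = design_mllm(poster.background, texts)
    poster.styled_text = arttext_diffusion(poster.layout)
    return poster

poster = generate_poster("noir movie night", ["TITLE", "DATE"])
```

Because each stage only consumes the previous stage's output, any one module can be swapped or re-run independently, which is the customization property the abstract emphasizes.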


Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency

Zhao, Shiji, Duan, Ranjie, Wang, Fengxiang, Chen, Chi, Kang, Caixin, Tao, Jialing, Chen, YueFeng, Xue, Hui, Wei, Xingxing

arXiv.org Artificial Intelligence

Multimodal Large Language Models (MLLMs) have achieved impressive performance and have been deployed in commercial applications, but their safety mechanisms remain potentially vulnerable. Jailbreak attacks are red-teaming methods that aim to bypass these safety mechanisms and uncover MLLMs' potential risks. Existing MLLM jailbreak methods typically bypass a model's safety mechanism through complex optimization or carefully designed image and text prompts, yet they achieve a low attack success rate on commercial closed-source MLLMs. Unlike previous research, we empirically find a Shuffle Inconsistency between MLLMs' comprehension ability and safety ability on shuffled harmful instructions: in terms of comprehension, MLLMs understand shuffled harmful text-image instructions well, but in terms of safety, they are easily bypassed by those same shuffled instructions, leading to harmful responses. We then propose a text-image jailbreak attack named SI-Attack. Specifically, to fully exploit the Shuffle Inconsistency and overcome shuffle randomness, we apply a query-based black-box optimization method that selects the most harmful shuffled inputs based on feedback from a toxicity judge model. A series of experiments shows that SI-Attack improves attack performance on three benchmarks. In particular, SI-Attack markedly improves the attack success rate on commercial MLLMs such as GPT-4o and Claude-3.5-Sonnet.
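The selection step of SI-Attack (shuffle, query a judge, keep the highest-scoring candidate) can be sketched as a simple query-based black-box search. This is an illustrative sketch with benign placeholders: `judge_score` is a hypothetical stand-in for the paper's toxicity judge model, and the word-level shuffle is only one possible shuffle granularity.

```python
import random

def word_shuffle(text, rng):
    # One possible shuffle granularity (word-level).
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

def si_attack_select(instruction, judge_score, n_queries=20, seed=0):
    """Query-based black-box selection: among random shuffles of the
    instruction, keep the one the judge scores highest."""
    rng = random.Random(seed)
    best_input, best_score = instruction, judge_score(instruction)
    for _ in range(n_queries):
        candidate = word_shuffle(instruction, rng)
        score = judge_score(candidate)
        if score > best_score:
            best_input, best_score = candidate, score
    return best_input, best_score

# Toy judge for illustration only: scores a shuffle by how far the first
# word has moved from its original position.
original = "please describe the sealed box"
toy_judge = lambda s: float(s.split().index("please"))
best, score = si_attack_select(original, toy_judge)
```

The search needs only black-box query access to the judge's scalar feedback, which is why the approach transfers to closed-source targets.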


VitaGlyph: Vitalizing Artistic Typography with Flexible Dual-branch Diffusion Models

Feng, Kailai, Zhang, Yabo, Yu, Haodong, Ji, Zhilong, Bai, Jinfeng, Zhang, Hongzhi, Zuo, Wangmeng

arXiv.org Artificial Intelligence

Artistic typography is a technique for visualizing the meaning of an input character in an imaginative yet readable manner. With powerful text-to-image diffusion models, existing methods directly design the overall geometry and texture of the input character, making it challenging to ensure both creativity and legibility. In this paper, we introduce a dual-branch, training-free method, VitaGlyph, enabling flexible artistic typography with controllable geometry change to maintain readability. The key insight of VitaGlyph is to treat the input character as a scene composed of a Subject and a Surrounding, then render them under varying degrees of geometric transformation. The subject flexibly expresses the essential concept of the input character, while the surrounding enriches the relevant background without altering the shape. Specifically, we implement VitaGlyph through a three-phase framework: (i) Knowledge Acquisition leverages large language models to design text descriptions of the subject and surrounding. (ii) Regional Decomposition detects the part that best matches the subject description and divides the input glyph image into subject and surrounding regions. (iii) Typography Stylization first refines the structure of the subject region via Semantic Typography, and then separately renders the textures of the Subject and Surrounding regions through Controllable Compositional Generation. Experimental results demonstrate that VitaGlyph not only achieves better artistry and readability, but also manages to depict multiple customized concepts, facilitating more creative and pleasing artistic typography generation. Our code will be made publicly available at https://github.com/Carlofkl/VitaGlyph.
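The Regional Decomposition phase can be illustrated with a toy mask-based split of a glyph bitmap into Subject and Surrounding regions. This is a sketch under stated assumptions: the subject mask here is a hypothetical stand-in for the paper's step of detecting the region that best matches the subject description.

```python
def decompose(glyph, subject_mask):
    """Split a glyph bitmap into complementary Subject and Surrounding
    regions, so the two branches can be stylized separately."""
    subject = [[px if m else 0 for px, m in zip(row, mrow)]
               for row, mrow in zip(glyph, subject_mask)]
    surrounding = [[px if not m else 0 for px, m in zip(row, mrow)]
                   for row, mrow in zip(glyph, subject_mask)]
    return subject, surrounding

glyph = [[1, 1],
         [1, 0]]
mask  = [[1, 0],          # hypothetical detector output: top-left pixel
         [0, 0]]          # is the subject, the rest is surrounding
subject, surrounding = decompose(glyph, mask)
```

Keeping the two regions complementary is what lets the surrounding branch add texture without altering the subject's shape.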


Towards Visual Text Design Transfer Across Languages

Choi, Yejin, Chung, Jiwan, Shim, Sumin, Oh, Giyeong, Yu, Youngjae

arXiv.org Artificial Intelligence

Visual text design plays a critical role in conveying themes, emotions, and atmospheres in multimodal formats such as film posters and album covers. Translating these visual and textual elements across languages extends the concept of translation beyond mere text, requiring the adaptation of aesthetic and stylistic features. To address this, we introduce the novel task of multimodal style translation and MuST-Bench, a benchmark designed to evaluate the ability of visual text generation models to perform translation across different writing systems while preserving design intent. Our initial experiments on MuST-Bench reveal that existing visual text generation models struggle with the proposed task due to the inadequacy of textual descriptions in conveying visual design. In response, we introduce SIGIL, a framework for multimodal style translation that eliminates the need for style descriptions. SIGIL enhances image generation models through three innovations: glyph latents for multilingual settings, pretrained VAEs for stable style guidance, and an OCR model with reinforcement learning feedback for optimizing readable character generation. SIGIL outperforms existing baselines by achieving superior style consistency and legibility while maintaining visual fidelity, setting itself apart from traditional description-based approaches. We release MuST-Bench publicly for broader use and exploration at https://huggingface.co/datasets/yejinc/MuST-Bench.


Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models

Kao, Ching-Chia, Yu, Chia-Mu, Lu, Chun-Shien, Chen, Chu-Song

arXiv.org Artificial Intelligence

[The abstract is missing from this extraction; what survives is a set of example question-to-prompt pairs from the paper, in which natural-language questions (a medieval knight in a forest, a sunset over the ocean, a futuristic factory robot, a village at night) are rewritten as detailed Stable Diffusion prompts.]


Khattat: Enhancing Readability and Concept Representation of Semantic Typography

Hussein, Ahmed, Elsetohy, Alaa, Hadhoud, Sama, Bakr, Tameem, Rohaim, Yasser, AlKhamissi, Badr

arXiv.org Artificial Intelligence

Designing expressive typography that visually conveys a word's meaning while maintaining readability is a complex task known as semantic typography. It involves selecting an idea, choosing an appropriate font, and balancing creativity with legibility. We introduce an end-to-end system that automates this process. First, a Large Language Model (LLM) generates imagery ideas for the word, which is useful for abstract concepts like "freedom." Then, the pretrained FontCLIP model automatically selects a suitable font based on its semantic understanding of font attributes. The system identifies optimal regions of the word for morphing and iteratively transforms them using a pretrained diffusion model. A key feature is our OCR-based loss function, which enhances readability and enables simultaneous stylization of multiple characters. We compare our method with other baselines, demonstrating substantial readability improvements and versatility across multiple languages and writing scripts.
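An OCR-based readability loss of this kind can be sketched as the negative log-likelihood of the target word under an OCR model's per-character confidences. This is an illustrative sketch: `ocr_char_probs` is a hypothetical stand-in for a pretrained OCR model, and the real loss would operate on rendered images inside the optimization loop rather than on precomputed probabilities.

```python
import math

def ocr_loss(ocr_char_probs, target_word):
    """Negative log-likelihood of the target word under the OCR model;
    low loss means the stylized word remains readable. Scoring every
    character at once is what allows stylizing several characters
    simultaneously."""
    probs = ocr_char_probs(target_word)   # one confidence per character
    assert len(probs) == len(target_word)
    return -sum(math.log(max(p, 1e-9)) for p in probs)

# Toy OCR confidences: a lightly stylized word stays recognizable,
# an over-stylized one does not.
readable  = lambda w: [0.95] * len(w)
illegible = lambda w: [0.30] * len(w)
```

Minimizing this term during morphing pushes the diffusion edits away from deformations the OCR model can no longer decode.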


Intelligent Artistic Typography: A Comprehensive Review of Artistic Text Design and Generation

Bai, Yuhang, Huang, Zichuan, Gao, Wenshuo, Yang, Shuai, Liu, Jiaying

arXiv.org Artificial Intelligence

Artistic text generation aims to amplify the aesthetic qualities of text while maintaining readability. It makes text more attractive and better at conveying its expression, lending itself to a wide range of application scenarios such as social media display, consumer electronics, fashion, and graphic design. Artistic text generation comprises artistic text stylization and semantic typography. Artistic text stylization concentrates on effects overlaid on the text, such as shadows, outlines, colors, glows, and textures. By comparison, semantic typography focuses on deforming the characters to strengthen their visual representation in a way that mimics the semantics of the text. This overview paper provides an introduction to both artistic text stylization and semantic typography, including the taxonomy, the key ideas of representative methods, and applications in static and dynamic artistic text generation. Furthermore, datasets and evaluation metrics are introduced, and future directions of artistic text generation are discussed. A comprehensive list of artistic text generation models studied in this review is available at https://github.com/williamyang1991/Awesome-Artistic-Typography/.


Kinetic Typography Diffusion Model

Park, Seonmi, Bae, Inhwan, Shin, Seunghyun, Jeon, Hae-Gon

arXiv.org Artificial Intelligence

This paper introduces a method for realistic kinetic typography that generates user-preferred, animatable text content. We draw on recent advances in guided video diffusion models to achieve visually pleasing text appearances. To do this, we first construct a kinetic typography dataset comprising about 600K videos. Our dataset is built from a variety of combinations of 584 templates designed by professional motion graphics designers and involves changing each letter's position, glyph, and size (i.e., flying, glitches, chromatic aberration, reflecting effects, etc.). Next, we propose a video diffusion model for kinetic typography, which must satisfy three requirements: aesthetic appearance, motion effects, and readable letters. To meet these requirements, we present static and dynamic captions used as spatial and temporal guidance for the video diffusion model, respectively. The static caption describes the overall appearance of the video, such as colors, texture, and glyphs, which represent the shape of each letter. The dynamic caption accounts for the movements of letters and backgrounds. We add a further guidance signal, applied to the text content via zero convolution and imposed on the diffusion model, to determine which text content should be visible in the video. Lastly, we propose a glyph loss, which minimizes only the difference between the predicted word and its ground truth, to make the predicted letters readable. Experiments show that our model generates kinetic typography videos with legible and artistic letter motions based on text prompts.
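A glyph loss that minimizes only the difference between the predicted word and its ground truth can be sketched as a masked reconstruction error. This is a toy sketch: the flat pixel lists and the letter mask below are hypothetical stand-ins for the image tensors in an actual diffusion training loop.

```python
def glyph_loss(pred, target, mask):
    """Mean squared error restricted to the masked (letter) pixels, so
    letter fidelity is enforced while the background and motion effects
    remain unconstrained by this term."""
    num = sum(m * (p - t) ** 2 for p, t, m in zip(pred, target, mask))
    den = max(sum(mask), 1)   # avoid division by zero for empty masks
    return num / den

pred   = [0.9, 0.1, 0.5, 0.2]
target = [1.0, 0.0, 0.9, 0.8]
mask   = [1,   1,   0,   0]   # only the first two pixels are letters
loss = glyph_loss(pred, target, mask)
```

Note that the large errors on the unmasked background pixels contribute nothing, which is the "only minimizing the difference between the predicted word and its ground truth" behavior the abstract describes.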