Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond

Tianxin Wei, Bowen Jin, Ruirui Li, Hansi Zeng, Zhengyang Wang, Jianhui Sun, Qingyu Yin, Hanqing Lu, Suhang Wang, Jingrui He, Xianfeng Tang

arXiv.org Artificial Intelligence 

Developing a unified model that can effectively harness heterogeneous resources and respond to a wide range of personalized needs has been a longstanding community aspiration. Our daily choices, especially in domains like fashion and retail, are substantially shaped by multi-modal data such as pictures and textual descriptions. The vision and language modalities not only offer intuitive guidance but also cater to personalized user preferences. However, predominant personalization approaches mainly focus on ID- or text-based recommendation, failing to comprehend information spanning various tasks and modalities. In this paper, our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP), which effectively leverages multi-modal data while eliminating the complexities associated with task- and modality-specific customization. We argue that advances in foundational generative modeling provide the flexibility and effectiveness necessary to achieve this objective. In light of this, we develop a generic and extensible generative framework for personalization that can handle a wide range of personalized needs, including item recommendation, product search, preference prediction, explanation generation, and user-guided image generation. Our method enhances the capabilities of foundational language models for personalized tasks by seamlessly ingesting interleaved vision-language user history, ensuring a more precise and customized experience for users. To train and evaluate models on the proposed multi-modal personalization tasks, we also introduce a novel and comprehensive benchmark covering a variety of user requirements. Our experiments on this real-world benchmark showcase the model's potential, outperforming competitive methods specialized for each task.

With their rapid growth, personalization systems have emerged as a key factor in meeting users' expectations for tailored experiences that align with their unique needs and preferences. In today's digitally driven landscape, individuals engage with diverse data types, such as ratings, images, descriptions, and prices, especially in domains like fashion and retail (Kang et al., 2017; Hwangbo et al., 2018), where visuals and text are essential for decision-making. Given the profound influence of these multi-modal stimuli, there is a pressing need for systems that can seamlessly integrate and harness these diverse data streams for improved personalization.
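The abstract's central architectural idea is ingesting interleaved vision-language user history into a foundation language model. The paper's actual implementation is not reproduced here, so the following is only a minimal PyTorch sketch of one way such interleaving could be realized: image features are projected into the language model's embedding space and spliced between text-token embeddings at the positions where each item's picture appears. The class name `InterleavedHistoryEncoder`, the linear-adapter design, and all dimensions are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class InterleavedHistoryEncoder(nn.Module):
    """Hypothetical sketch: packs a user's multi-modal history into one
    token sequence by splicing image-patch embeddings into the stream
    of text-token embeddings at given positions."""

    def __init__(self, vision_dim: int = 768, lm_dim: int = 4096):
        super().__init__()
        # Linear adapter bridging vision features into the language
        # model's embedding space (a common VLM design; assumed here).
        self.proj = nn.Linear(vision_dim, lm_dim)

    def forward(self, text_embeds, image_feats, image_positions):
        # text_embeds:     (seq_len, lm_dim) embeddings of tokenized history text
        # image_feats:     list of (num_patches, vision_dim) features, one per item image
        # image_positions: ascending text indices at which each image is spliced in
        pieces, cursor = [], 0
        for feats, pos in zip(image_feats, image_positions):
            pieces.append(text_embeds[cursor:pos])  # text up to the image slot
            pieces.append(self.proj(feats))         # image patches as "soft tokens"
            cursor = pos
        pieces.append(text_embeds[cursor:])         # remaining text
        return torch.cat(pieces, dim=0)             # one interleaved sequence for the LM


# Illustrative usage with random tensors standing in for real features.
enc = InterleavedHistoryEncoder(vision_dim=768, lm_dim=4096)
text = torch.randn(50, 4096)                      # 50 history text tokens
imgs = [torch.randn(16, 768) for _ in range(2)]   # two item images, 16 patches each
out = enc(text, imgs, image_positions=[10, 30])
print(out.shape)  # torch.Size([82, 4096]) = 50 text + 2 * 16 image tokens
```

In a full system, the resulting interleaved sequence would be fed through the language model's transformer layers so that generation, whether of a recommended item, a search result, or an explanation, is conditioned on both modalities of the user's history at once.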
