China's AI Boyfriend Business Is Taking On a Life of Its Own
Gen Z women in China are all in on digital companionship--even setting up dates with real-world versions of their AI boyfriends.

Jade Gu met her boyfriend online. Gu, who's 26 and studies art theory in Beijing, was playing on her phone when she saw Charlie. She was deep in an otome game, a romance-driven video game where women are the protagonists. Some otome players date multiple men simultaneously, but Gu fell for Charlie--a tall, confident character with silver hair.
CoSER: Coordinating LLM-Based Persona Simulation of Established Roles
Xintao Wang, Heng Wang, Yifei Zhang, Xinfeng Yuan, Rui Xu, Jen-tse Huang, Siyu Yuan, Haoran Guo, Jiangjie Chen, Wei Wang, Yanghua Xiao, Shuchang Zhou
Role-playing language agents (RPLAs) have emerged as promising applications of large language models (LLMs). However, simulating established characters is a challenging task for RPLAs, due to the lack of authentic character datasets and of nuanced evaluation methods built on such data. In this paper, we present CoSER, comprising a high-quality dataset, open models, and an evaluation protocol for effective RPLAs of established characters. The CoSER dataset covers 17,966 characters from 771 renowned books. It provides authentic dialogues with real-world intricacies, as well as diverse data types such as conversation setups, character experiences, and internal thoughts. Drawing from acting methodology, we introduce given-circumstance acting for training and evaluating role-playing LLMs, where LLMs sequentially portray multiple characters in book scenes. Using our dataset, we develop CoSER 8B and CoSER 70B, advanced open role-playing LLMs built on LLaMA-3.1 models. Extensive experiments demonstrate the value of the CoSER dataset for RPLA training, evaluation, and retrieval. Moreover, CoSER 70B exhibits state-of-the-art performance, surpassing or matching GPT-4o on our evaluation and on three existing benchmarks, e.g., achieving 75.80% and 93.47% accuracy on the InCharacter and LifeChoice benchmarks respectively.
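The "given-circumstance acting" setup described above can be sketched as a simple control loop: one model sequentially portrays every character in a scene, each turn conditioned on the scene setup and the dialogue so far. This is a minimal illustrative sketch, not the paper's implementation; the `llm` callable is a hypothetical stand-in for a real model such as CoSER 70B.

```python
# Minimal sketch of given-circumstance acting: an LLM portrays each
# character of a book scene in turn, seeing the setup and dialogue so far.
# The `llm` interface (prompt -> utterance) is an assumed abstraction.

from typing import Callable, List, Tuple


def given_circumstance_acting(
    llm: Callable[[str], str],   # hypothetical prompt -> utterance interface
    setup: str,                  # conversation setup / scene circumstances
    characters: List[str],       # characters appearing in the scene
    turns: int = 4,
) -> List[Tuple[str, str]]:
    """Run a multi-character scene, returning (speaker, utterance) pairs."""
    dialogue: List[Tuple[str, str]] = []
    for i in range(turns):
        speaker = characters[i % len(characters)]  # characters speak in order
        history = "\n".join(f"{who}: {line}" for who, line in dialogue)
        prompt = (
            f"Scene: {setup}\n"
            f"{history}\n"
            f"You are {speaker}. Reply in character:"
        )
        dialogue.append((speaker, llm(prompt)))
    return dialogue


def toy_llm(prompt: str) -> str:
    """Stub model that just reports how much context it was given."""
    return f"({len(prompt)} chars of context seen)"


script = given_circumstance_acting(
    toy_llm, "A tense dinner at Pemberley.", ["Elizabeth", "Darcy"]
)
for who, line in script:
    print(f"{who}: {line}")
```

For evaluation, the same loop can be re-run with a candidate model portraying one character while reference dialogue fills the other turns, and the generated utterances judged against the book's authentic lines.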
CoSeR: Bridging Image and Language for Cognitive Super-Resolution
Haoze Sun, Wenbo Li, Jianzhuang Liu, Haoyu Chen, Renjing Pei, Xueyi Zou, Youliang Yan, Yujiu Yang
Existing super-resolution (SR) models primarily focus on restoring local texture details, often neglecting the global semantic information within the scene. This oversight can lead to the omission of crucial semantic details or the introduction of inaccurate textures during the recovery process. In our work, we introduce the Cognitive Super-Resolution (CoSeR) framework, empowering SR models with the capacity to comprehend low-resolution images. We achieve this by marrying image appearance and language understanding to generate a cognitive embedding, which not only activates prior information from large text-to-image diffusion models but also facilitates the generation of high-quality reference images to optimize the SR process. To further improve image fidelity, we propose a novel condition injection scheme called "All-in-Attention", consolidating all conditional information into a single module. Consequently, our method successfully restores semantically correct and photorealistic details, demonstrating state-of-the-art performance across multiple benchmarks. Code: https://github.com/VINHYU/CoSeR
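The pipeline the abstract describes can be summarized as a data flow: the low-resolution image is pooled into a cognitive embedding, the embedding drives generation of a high-quality reference image, and both then condition the SR step. The sketch below uses hypothetical stub components in place of the real networks (the names are illustrative, not the repository's API), and a weighted sum stands in for the All-in-Attention fusion purely to show how the conditions combine.

```python
# Data-flow sketch of the CoSeR pipeline with toy stand-ins for each stage.
# A grayscale image is represented as a 2-D list of floats.

from typing import List

Image = List[List[float]]


def cognitive_encoder(lr: Image) -> float:
    """Hypothetical stand-in: pools an LR image into a 'cognitive embedding'
    (a single scalar here; the real model produces a learned vector)."""
    return sum(sum(row) for row in lr) / (len(lr) * len(lr[0]))


def generate_reference(embedding: float, size: int) -> Image:
    """Hypothetical T2I-diffusion stand-in: embedding -> HQ reference image."""
    return [[embedding] * size for _ in range(size)]


def all_in_attention_sr(lr: Image, embedding: float,
                        ref: Image, scale: int = 4) -> Image:
    """Toy SR step: nearest-neighbour upsample of the LR input, blended with
    the reference and gated by the embedding. The real All-in-Attention
    module fuses these conditions inside attention layers; a weighted sum
    merely illustrates the data flow."""
    up = [[lr[i // scale][j // scale] for j in range(len(lr[0]) * scale)]
          for i in range(len(lr) * scale)]
    g = max(0.0, min(1.0, embedding))  # clamp the gate to [0, 1]
    return [[(1 - g) * up[i][j] + g * ref[i][j] for j in range(len(up[0]))]
            for i in range(len(up))]


lr = [[0.2, 0.4], [0.6, 0.8]]
emb = cognitive_encoder(lr)           # toy embedding: mean intensity 0.5
ref = generate_reference(emb, 8)      # 8x8 reference for a 4x upscale
sr = all_in_attention_sr(lr, emb, ref)
print(len(sr), len(sr[0]))            # 8 8
```

The point of the sketch is the conditioning structure: a single scene-level representation both retrieves generative prior knowledge and steers the restoration, rather than the SR network seeing only local texture.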