Ovis-Image Technical Report
Guo-Hua Wang, Liangfu Cao, Tianyu Cui, Minghao Fu, Xiaohao Chen, Pengxin Zhan, Jianshan Zhao, Lan Li, Bowen Fu, Jiaqi Liu, Qing-Guo Chen
arXiv.org Artificial Intelligence
We introduce $\textbf{Ovis-Image}$, a 7B text-to-image model specifically optimized for high-quality text rendering, designed to operate efficiently under stringent computational constraints. Built upon our previous Ovis-U1 framework, Ovis-Image integrates a diffusion-based visual decoder with the stronger Ovis 2.5 multimodal backbone, leveraging a text-centric training pipeline that combines large-scale pre-training with carefully tailored post-training refinements. Despite its compact architecture, Ovis-Image achieves text rendering performance on par with significantly larger open models such as Qwen-Image and approaches closed-source systems like Seedream and GPT-4o. Crucially, the model remains deployable on a single high-end GPU with moderate memory, narrowing the gap between frontier-level text rendering and practical deployment. Our results indicate that combining a strong multimodal backbone with a carefully designed, text-focused training recipe is sufficient to achieve reliable bilingual text rendering without resorting to oversized or proprietary models.
December 1, 2025