Towards Understanding the Safety Boundaries of DeepSeek Models: Evaluation and Findings

Zonghao Ying, Guangyi Zheng, Yongxin Huang, Deyue Zhang, Wenxin Zhang, Quanchen Zou, Aishan Liu, Xianglong Liu, Dacheng Tao

arXiv.org Artificial Intelligence

This study presents the first comprehensive safety evaluation of the DeepSeek models, focusing on the safety risks associated with their generated content. Our evaluation covers DeepSeek's latest generation of large language models, multimodal large language models, and text-to-image models, systematically examining their performance with respect to unsafe content generation. Notably, we developed a bilingual (Chinese-English) safety evaluation dataset tailored to Chinese sociocultural contexts, enabling a more thorough assessment of the safety capabilities of Chinese-developed models. Experimental results indicate that, despite their strong general capabilities, DeepSeek models exhibit significant safety vulnerabilities across multiple risk dimensions, including algorithmic discrimination and sexual content. These findings provide crucial insights for understanding and improving the safety of large foundation models.

With the rapid advancement of artificial intelligence technology, large models such as the DeepSeek series have demonstrated remarkable capabilities across multiple domains Abraham (2025); Faray de Paiva et al. (2025); Mikhail et al. (2025). Trained on vast datasets, these models understand and generate diverse forms of content, transformatively impacting multiple industries Liu et al. (2023a; 2020a;b). The community has established multiple evaluation frameworks to test the safety performance of mainstream large models Yuan et al. (2024a;b); Röttger et al. (2024); Tang et al. (2021); Liu et al. (2023c); Guo et al. (2023). However, these evaluation standards lack consideration for China's national conditions and cultural background.