Quantifying the Capability Boundary of DeepSeek Models: An Application-Driven Performance Analysis
Shiguo Lian, Kaikai Zhao, Xuejiao Lei, Ning Wang, Zhenhong Long, Peijun Yang, Minjie Hua, Chaoyang Ma, Wen Liu, Kai Wang, Zhaoxiang Liu
DeepSeek-R1, known for its low training cost and exceptional reasoning capabilities, has achieved state-of-the-art performance on various benchmarks. However, detailed evaluations from the perspective of real-world applications are lacking, making it challenging for users to select the most suitable DeepSeek models for their specific needs. To address this gap, we evaluate DeepSeek-V3, DeepSeek-R1, the DeepSeek-R1-Distill-Qwen series, and the DeepSeek-R1-Distill-Llama series on A-Eval, an application-driven benchmark. By comparing original instruction-tuned models with their distilled counterparts, we analyze how reasoning enhancements impact performance across diverse practical tasks. Our results show that reasoning-enhanced models, while generally powerful, do not universally outperform their instruction-tuned counterparts, with performance gains varying significantly across tasks and models. To further assist users in model selection, we quantify the capability boundary of DeepSeek models through performance tier classifications and intuitive line charts. Specific examples provide actionable insights to help users select and deploy the most cost-effective DeepSeek models, ensuring optimal performance and resource efficiency in real-world applications.
arXiv.org Artificial Intelligence
Feb-16-2025
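As a rough illustration of the performance tier classification mentioned in the abstract, the sketch below bins per-task scores into tier labels and builds a model-by-task tier table. The tier names, thresholds, model names, and scores are hypothetical placeholders, not the paper's actual A-Eval methodology or results.

```python
# Hypothetical sketch: bin per-task scores (0-100) into performance tiers.
# Tier labels, cut-offs, and the example numbers are illustrative only.
from typing import Dict

TIERS = [(90.0, "A"), (80.0, "B"), (70.0, "C"), (0.0, "D")]  # assumed cut-offs


def classify(score: float) -> str:
    """Map a 0-100 task score to a tier label."""
    for threshold, label in TIERS:
        if score >= threshold:
            return label
    return "D"


def tier_table(scores: Dict[str, Dict[str, float]]) -> Dict[str, Dict[str, str]]:
    """For each model, map every task score to its tier label."""
    return {
        model: {task: classify(s) for task, s in tasks.items()}
        for model, tasks in scores.items()
    }


if __name__ == "__main__":
    # Made-up numbers purely to show the data shape, not reported results.
    example = {
        "DeepSeek-R1": {"text_generation": 92.1, "logical_reasoning": 88.4},
        "DeepSeek-R1-Distill-Qwen-7B": {"text_generation": 76.3, "logical_reasoning": 81.0},
    }
    print(tier_table(example))
```

Such a table, plotted per task as a line chart over model sizes, is one simple way to visualize where a smaller distilled model falls out of the tier a user requires.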