Dao, Anh
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
Dou, Longxu, Liu, Qian, Zhou, Fan, Chen, Changyu, Wang, Zili, Jin, Ziqi, Liu, Zichen, Zhu, Tongyao, Du, Cunxiao, Yang, Penghui, Wang, Haonan, Liu, Jiaheng, Zhao, Yongchi, Feng, Xiachong, Mao, Xin, Yeung, Man Tsung, Pipatanakul, Kunat, Koto, Fajri, Thu, Min Si, Kydlíček, Hynek, Liu, Zeyi, Lin, Qunshu, Sripaisarnmongkol, Sittipong, Sae-Khow, Kridtaphad, Thongchim, Nirattisai, Konkaew, Taechawat, Borijindargoon, Narong, Dao, Anh, Maneegard, Matichon, Artkaew, Phakphum, Yong, Zheng-Xin, Nguyen, Quan, Phatthiyaphaibun, Wannaphong, Tran, Hoang H., Zhang, Mike, Chen, Shiqi, Pang, Tianyu, Du, Chao, Wan, Xinyi, Lu, Wei, Lin, Min
Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages while retaining proficiency in Chinese and English. The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA languages. We also deliver a comprehensive cookbook on how to develop multilingual models efficiently, covering five key aspects: data curation, pre-training, post-training, model customization, and evaluation. We hope that the Sailor2 models (Apache 2.0 license) will drive language development in the SEA region, and that the Sailor2 cookbook will inspire researchers to build more inclusive LLMs for other under-served languages.
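As a minimal illustration of the continual pre-training data mixture described above (400B SEA-specific tokens plus 100B replay tokens to retain Chinese and English proficiency), the following sketch samples training documents at an assumed 4:1 ratio. The corpus names and sampling logic are illustrative assumptions, not Sailor2's actual data pipeline.

# Illustrative sketch only: mixing SEA-specific data with replay data
# at the 400B:100B (4:1) token ratio described in the abstract.
# Corpus names and sampling logic are hypothetical, not Sailor2's pipeline.
import random

MIXTURE = {
    "sea_corpus": 0.8,     # 400B SEA-specific tokens
    "replay_corpus": 0.2,  # 100B replay tokens (English/Chinese) to avoid forgetting
}

def sample_source(rng: random.Random) -> str:
    """Pick which corpus the next training document is drawn from."""
    r = rng.random()
    cumulative = 0.0
    for name, weight in MIXTURE.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # fallback for floating-point edge cases

rng = random.Random(0)
counts = {"sea_corpus": 0, "replay_corpus": 0}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # roughly 8000 vs. 2000, matching the 4:1 replay ratio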
Visual Large Language Models for Generalized and Specialized Applications
Li, Yifan, Lai, Zhixin, Bao, Wentao, Tan, Zhen, Dao, Anh, Sui, Kewei, Shen, Jiayi, Liu, Dong, Liu, Huan, Kong, Yu
Visual-language models (VLMs) have emerged as a powerful tool for learning a unified embedding space for vision and language. Inspired by large language models, which have demonstrated strong reasoning and multi-task capabilities, visual large language models (VLLMs) are gaining increasing attention for building general-purpose VLMs. Despite the significant progress made in VLLMs, the related literature remains limited, particularly from a comprehensive application perspective, encompassing generalized and specialized applications across vision (image, video, depth), action, and language modalities. In this survey, we focus on the diverse applications of VLLMs, examining their usage scenarios, identifying ethical considerations and challenges, and discussing future directions for their development. By synthesizing these contents, we aim to provide a comprehensive guide that will pave the way for future innovations and broader applications of VLLMs. The paper list repository is available at: https://github.com/JackYFL/awesome-VLLMs.
LiteGPT: Large Vision-Language Model for Joint Chest X-ray Localization and Classification Task
Le-Duc, Khai, Zhang, Ryan, Nguyen, Ngoc Son, Pham, Tan-Hanh, Dao, Anh, Ngo, Ba Hung, Nguyen, Anh Totti, Hy, Truong-Son
Vision-language models have been extensively explored across a wide range of tasks, achieving satisfactory performance; however, their application in medical imaging remains underexplored. In this work, we propose a unified framework, LiteGPT, for medical imaging. We leverage multiple pre-trained visual encoders to enrich information and enhance the performance of vision-language models. To the best of our knowledge, this is the first study to utilize vision-language models for the novel task of joint localization and classification in medical images. In addition, we provide the first baselines for disease localization in chest X-rays. Finally, we set new state-of-the-art performance on the image classification task on the well-benchmarked VinDr-CXR dataset. All code and models are publicly available online: https://github.com/leduckhai/LiteGPT
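To illustrate the general idea of leveraging multiple pre-trained visual encoders mentioned above, the sketch below fuses patch features from two encoders by concatenation and projects them into a language model's embedding space. The module names, feature dimensions, and concatenation-based fusion are assumptions for illustration only, not LiteGPT's actual architecture; see the repository above for the authors' implementation.

# Hypothetical sketch: fusing features from two frozen pre-trained visual
# encoders and projecting them into a language model's embedding space.
# Dimensions and concatenation fusion are illustrative assumptions,
# not LiteGPT's actual architecture.
import torch
import torch.nn as nn

class MultiEncoderProjector(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, llm_dim: int):
        super().__init__()
        # Linear projector from the concatenated encoder features
        # to the language model's hidden size.
        self.proj = nn.Linear(dim_a + dim_b, llm_dim)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_*: (batch, num_patches, dim_*) patch features from each encoder
        fused = torch.cat([feats_a, feats_b], dim=-1)
        return self.proj(fused)  # (batch, num_patches, llm_dim) visual tokens

# Toy usage with random tensors standing in for two encoders' outputs.
projector = MultiEncoderProjector(dim_a=1024, dim_b=768, llm_dim=2048)
visual_tokens = projector(torch.randn(1, 196, 1024), torch.randn(1, 196, 768))
print(visual_tokens.shape)  # torch.Size([1, 196, 2048])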