Zhang, Wenhua
MMM-RS: A Multi-modal, Multi-GSD, Multi-scene Remote Sensing Dataset and Benchmark for Text-to-Image Generation
Luo, Jialin, Wang, Yuanzhi, Gu, Ziqi, Qiu, Yide, Yao, Shuaizhen, Wang, Fuyun, Xu, Chunyan, Zhang, Wenhua, Wang, Dan, Cui, Zhen
Recently, the diffusion-based generative paradigm has achieved impressive general image generation capabilities with text prompts, owing to its accurate distribution modeling and stable training process. However, generating diverse remote sensing (RS) images, which differ markedly from general images in scale and perspective, remains a formidable challenge due to the lack of a comprehensive RS image generation dataset spanning multiple modalities, ground sample distances (GSDs), and scenes. In this paper, we propose a Multi-modal, Multi-GSD, Multi-scene Remote Sensing (MMM-RS) dataset and benchmark for text-to-image generation in diverse remote sensing scenarios. Specifically, we first collect nine publicly available RS datasets and standardize all samples. To bridge RS images with textual semantic information, we use a large-scale pretrained vision-language model to automatically generate text prompts, followed by manual rectification, yielding information-rich text-image pairs (including multi-modal images). In particular, we design methods to derive, from a single sample, images at different GSDs and under various environmental conditions (e.g., low-light, foggy). After extensive manual screening and annotation refinement, we ultimately obtain the MMM-RS dataset, comprising approximately 2.1 million text-image pairs. Extensive experimental results verify that our proposed MMM-RS dataset allows off-the-shelf diffusion models to generate diverse RS images across various modalities, scenes, weather conditions, and GSDs. The dataset is available at https://github.com/ljl5261/MMM-RS.
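To illustrate the intended usage, below is a minimal sketch of sampling RS images from a text-to-image diffusion model via Hugging Face diffusers. The checkpoint path and the prompt wording are illustrative assumptions; the paper states that off-the-shelf diffusion models can be used, but does not prescribe this specific pipeline.

```python
# Minimal sketch, assuming a Stable Diffusion checkpoint fine-tuned on MMM-RS
# text-image pairs; the checkpoint path and prompt wording are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/sd-finetuned-on-mmm-rs",  # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# MMM-RS captions encode modality, GSD, scene, and weather, so prompts can
# condition on all four (the exact caption format here is an assumption).
prompt = "An RGB remote sensing image of a harbor, GSD 0.5 m, foggy weather"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("rs_harbor_foggy.png")
```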
CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting
Graham, Simon, Vu, Quoc Dang, Jahanifar, Mostafa, Weigert, Martin, Schmidt, Uwe, Zhang, Wenhua, Zhang, Jun, Yang, Sen, Xiang, Jinxi, Wang, Xiyue, Rumberger, Josef Lorenz, Baumann, Elias, Hirsch, Peter, Liu, Lihao, Hong, Chenyang, Aviles-Rivero, Angelica I., Jain, Ayushi, Ahn, Heeyoung, Hong, Yiyu, Azzuni, Hussam, Xu, Min, Yaqub, Mohammad, Blache, Marie-Claire, Piégu, Benoît, Vernay, Bertrand, Scherr, Tim, Böhland, Moritz, Löffler, Katharina, Li, Jiachen, Ying, Weiqin, Wang, Chixin, Kainmueller, Dagmar, Schönlieb, Carola-Bibiane, Liu, Shuolin, Talsania, Dhairya, Meda, Yughender, Mishra, Prakash, Ridzuan, Muhammad, Neumann, Oliver, Schilling, Marcel P., Reischl, Markus, Mikut, Ralf, Huang, Banban, Chien, Hsiang-Chin, Wang, Ching-Ping, Lee, Chia-Yen, Lin, Hong-Kun, Liu, Zaiyi, Pan, Xipeng, Han, Chu, Cheng, Jijun, Dawood, Muhammad, Deshpande, Srijay, Bashir, Raja Muhammad Saad, Shephard, Adam, Costa, Pedro, Nunes, João D., Campilho, Aurélio, Cardoso, Jaime S., S, Hrishikesh P, Puthussery, Densen, G, Devika R, C, Jiji V, Zhang, Ye, Fang, Zijie, Lin, Zhifan, Zhang, Yongbing, Lin, Chunhui, Zhang, Liukun, Mao, Lijian, Wu, Min, Vo, Vi Thi-Tuong, Kim, Soo-Hyung, Lee, Taebum, Kondo, Satoshi, Kasai, Satoshi, Dumbhare, Pranay, Phuse, Vedant, Dubey, Yash, Jamthikar, Ankush, Vuong, Trinh Thi Le, Kwak, Jin Tae, Ziaei, Dorsa, Jung, Hyun, Miao, Tianyi, Snead, David, Raza, Shan E Ahmed, Minhas, Fayyaz, Rajpoot, Nasir M.
Nuclear detection, segmentation and morphometric profiling are essential for furthering our understanding of the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis of the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, the associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
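As an illustration of the counting task, the sketch below derives cellular composition (per-class nucleus counts) from a model's instance and class maps. The array layout and the class ordering are assumptions for illustration, not the official CoNIC data format.

```python
# Minimal sketch of the counting task: derive cellular composition (per-class
# nucleus counts) from an instance map and a parallel class map. The H x W
# array layout and the 1..6 class ordering are illustrative assumptions.
import numpy as np

CLASS_NAMES = ["neutrophil", "epithelial", "lymphocyte",
               "plasma", "eosinophil", "connective"]  # assumed 1..6 ordering

def cellular_composition(inst_map: np.ndarray, class_map: np.ndarray) -> dict:
    """Count nuclei per class, assigning each instance its majority class label."""
    counts = {name: 0 for name in CLASS_NAMES}
    for inst_id in np.unique(inst_map):
        if inst_id == 0:                      # 0 marks background
            continue
        labels = class_map[inst_map == inst_id]
        fg = labels[labels > 0]               # drop background pixels
        if fg.size == 0:
            continue
        majority = int(np.bincount(fg).argmax())
        counts[CLASS_NAMES[majority - 1]] += 1
    return counts
```

Per-class counts of this kind are the kind of composition features that the post-challenge analysis aggregates over whole-slide images for dysplasia grading and survival analysis.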