Wei, Donglai
"See the World, Discover Knowledge": A Chinese Factuality Evaluation for Large Vision Language Models
Gu, Jihao, Wang, Yingyao, Bu, Pi, Wang, Chen, Wang, Ziming, Song, Tengtao, Wei, Donglai, Yuan, Jiale, Zhao, Yingxiu, He, Yancheng, Li, Shilong, Liu, Jiaheng, Cao, Meng, Song, Jun, Tan, Yingshui, Li, Xiang, Su, Wenbo, Zheng, Zhicheng, Zhu, Xiaoyong, Zheng, Bo
The evaluation of factual accuracy in large vision language models (LVLMs) has lagged behind their rapid development, making it challenging to fully reflect these models' knowledge capacity and reliability. In this paper, we introduce the first factuality-based visual question-answering benchmark in Chinese, named ChineseSimpleVQA, aimed at assessing the visual factuality of LVLMs across 8 major topics and 56 subtopics. The key features of this benchmark include a focus on the Chinese language, diverse knowledge types, multi-hop question construction, high-quality data, static consistency, and easy evaluation via short answers. Moreover, we contribute a rigorous data construction pipeline and decouple visual factuality into two parts: seeing the world (i.e., object recognition) and discovering knowledge. This decoupling allows us to analyze the capability boundaries and execution mechanisms of LVLMs. Subsequently, we evaluate 34 advanced open-source and closed-source models, revealing critical performance gaps within this field.
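The decoupling described above suggests a simple two-hop scoring scheme: judge the recognition hop and the knowledge hop separately, and give end-to-end credit only when both are right. The sketch below is not the benchmark's evaluation code; the field names (`recognition_answer`, `knowledge_answer`) and the exact-match normalization are illustrative assumptions only.

```python
# Minimal sketch of decoupled, two-hop short-answer scoring.
# Field names and the normalization rule are illustrative assumptions,
# not ChineseSimpleVQA's actual protocol.

def normalize(ans: str) -> str:
    """Lowercase and keep only alphanumeric characters (works for CJK text too)."""
    return "".join(ch for ch in ans.lower().strip() if ch.isalnum())

def score_item(item: dict, pred_recognition: str, pred_knowledge: str) -> dict:
    """Score one multi-hop item: hop 1 = object recognition, hop 2 = knowledge."""
    recog_ok = normalize(pred_recognition) == normalize(item["recognition_answer"])
    know_ok = normalize(pred_knowledge) == normalize(item["knowledge_answer"])
    return {
        "recognition": recog_ok,
        "knowledge": know_ok,
        # Credit the final answer only if the model also "saw" correctly,
        # separating perception errors from knowledge errors.
        "end_to_end": recog_ok and know_ok,
    }

if __name__ == "__main__":
    item = {"recognition_answer": "Mogao Caves", "knowledge_answer": "Dunhuang"}
    print(score_item(item, "mogao caves", "Dunhuang"))
```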
Learning Gaze-aware Compositional GAN
Aranjuelo, Nerea, Huang, Siyu, Arganda-Carreras, Ignacio, Unzueta, Luis, Otaegui, Oihana, Pfister, Hanspeter, Wei, Donglai
Gaze-annotated facial data is crucial for training deep neural networks (DNNs) for gaze estimation. However, obtaining these data is labor-intensive and requires specialized equipment due to the challenge of accurately annotating a subject's gaze direction. In this work, we present a generative framework that creates annotated gaze data by leveraging the benefits of labeled and unlabeled data sources. We propose a Gaze-aware Compositional GAN that learns to generate annotated facial images from a limited labeled dataset, and we then transfer this model to an unlabeled data domain to take advantage of the diversity it provides. Experiments demonstrate our approach's effectiveness in generating within-domain image augmentations on the ETH-XGaze dataset and cross-domain augmentations on the CelebAMask-HQ dataset for gaze estimation DNN training. We also show additional applications of our work, including facial image editing and gaze redirection.
Deep Rib Fracture Instance Segmentation and Classification from CT on the RibFrac Challenge
Yang, Jiancheng, Shi, Rui, Jin, Liang, Huang, Xiaoyang, Kuang, Kaiming, Wei, Donglai, Gu, Shixuan, Liu, Jianying, Liu, Pengfei, Chai, Zhizhong, Xiao, Yongjie, Chen, Hao, Xu, Liming, Du, Bang, Yan, Xiangyi, Tang, Hao, Alessio, Adam, Holste, Gregory, Zhang, Jiapeng, Wang, Xiaoming, He, Jianye, Che, Lixuan, Pfister, Hanspeter, Li, Ming, Ni, Bingbing
Rib fractures are a common and potentially severe injury that can be challenging and labor-intensive to detect in CT scans. Although efforts have been made in this field, the lack of large-scale annotated datasets and evaluation benchmarks has hindered the development and validation of deep learning algorithms. To address this issue, the RibFrac Challenge was introduced, providing a benchmark dataset of over 5,000 rib fractures from 660 CT scans, with voxel-level instance mask annotations and diagnosis labels for four clinical categories (buckle, nondisplaced, displaced, or segmental). The challenge includes two tracks: a detection (instance segmentation) track evaluated by an FROC-style metric and a classification track evaluated by an F1-style metric. During the MICCAI 2020 challenge period, 243 results were evaluated, and seven teams were invited to participate in the challenge summary. The analysis revealed that several top rib fracture detection solutions achieved performance comparable to, or even better than, that of human experts. Nevertheless, the current rib fracture classification solutions are hardly clinically applicable, which remains an interesting direction for future work. As an active benchmark and research resource, the data and online evaluation of the RibFrac Challenge are available at the challenge website. As an independent contribution, we have also extended our previous internal baseline by incorporating recent advancements in large-scale pretrained networks and point-based rib segmentation techniques. The resulting FracNet+ demonstrates competitive performance in rib fracture detection, laying a foundation for further research and development in AI-assisted rib fracture detection and diagnosis.
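An FROC-style detection score averages sensitivity over a handful of false-positive-per-scan budgets. The snippet below is a generic FROC computation under assumed inputs (pooled detection confidences, true/false-positive flags, total lesion count); it is a sketch of the metric family, not the challenge's official evaluation code.

```python
import numpy as np

def froc_score(confidences, is_tp, n_gt, n_scans, fp_rates=(0.5, 1, 2, 4, 8)):
    """Generic FROC-style score: mean sensitivity at fixed false positives per scan."""
    confidences = np.asarray(confidences, dtype=float)
    is_tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(-confidences)              # sweep thresholds from high to low
    tp_cum = np.cumsum(is_tp[order])              # true positives accepted so far
    fp_cum = np.cumsum(~is_tp[order])             # false positives accepted so far
    sensitivity = tp_cum / max(n_gt, 1)
    fps_per_scan = fp_cum / n_scans
    sens_at_budget = [
        sensitivity[fps_per_scan <= rate].max() if np.any(fps_per_scan <= rate) else 0.0
        for rate in fp_rates
    ]
    return float(np.mean(sens_at_budget))

# Toy example: 6 pooled detections over 2 scans containing 4 ground-truth fractures.
print(froc_score([0.9, 0.8, 0.7, 0.6, 0.5, 0.4],
                 [True, True, False, True, False, False],
                 n_gt=4, n_scans=2))              # -> 0.75
```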
Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit Point-Graph Networks
Xie, Kangxian, Yang, Jiancheng, Wei, Donglai, Weng, Ziqiao, Fua, Pascal
Pulmonary diseases rank prominently among the principal causes of death worldwide. Curing them will require, among other things, a better understanding of the many complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. In theory, they can be modeled using high-resolution image stacks. Unfortunately, standard CNN approaches operating on dense voxel grids are prohibitively expensive. To remedy this, we introduce a point-based approach that preserves the graph connectivity of the tree skeleton and incorporates an implicit surface representation. It delivers state-of-the-art accuracy at a low computational cost, and the resulting models have usable surfaces. Due to the scarcity of publicly accessible data, we have also curated an extensive dataset to evaluate our approach and will make it public.
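The combination sketched in the abstract — features attached to skeleton points, propagated along graph edges, then queried by an implicit function — can be illustrated with a toy example. The snippet below uses random weights and made-up data purely to show the data flow; it does not reflect the paper's actual architecture or training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy skeleton: node coordinates, per-node features, and tree edges (a simple chain).
nodes = rng.normal(size=(8, 3))
feats = rng.normal(size=(8, 16))
edges = [(i, i + 1) for i in range(7)]

# One round of graph aggregation: average each node's feature with its neighbors'.
agg, deg = feats.copy(), np.ones(len(nodes))
for i, j in edges:
    agg[i] += feats[j]; agg[j] += feats[i]
    deg[i] += 1; deg[j] += 1
agg /= deg[:, None]

def occupancy(query, k=3):
    """Implicit-surface style query: MLP(query coords, features of nearby skeleton nodes)."""
    nearest = np.argsort(np.linalg.norm(nodes - query, axis=1))[:k]
    x = np.concatenate([query, agg[nearest].mean(axis=0)])   # 3 + 16 inputs
    # Tiny random-weight MLP; a real model would learn these weights.
    w1, w2 = rng.normal(size=(19, 32)), rng.normal(size=(32, 1))
    hidden = np.maximum(x @ w1, 0.0)
    logit = (hidden @ w2).item()
    return 1.0 / (1.0 + np.exp(-logit))                      # inside-probability

print(occupancy(np.zeros(3)))
```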
Biomedical image analysis competitions: The state of current participation practice
Eisenmann, Matthias, Reinke, Annika, Weru, Vivienn, Tizabi, Minu Dietlinde, Isensee, Fabian, Adler, Tim J., Godau, Patrick, Cheplygina, Veronika, Kozubek, Michal, Ali, Sharib, Gupta, Anubha, Kybic, Jan, Noble, Alison, de Solórzano, Carlos Ortiz, Pachade, Samiksha, Petitjean, Caroline, Sage, Daniel, Wei, Donglai, Wilden, Elizabeth, Alapatt, Deepak, Andrearczyk, Vincent, Baid, Ujjwal, Bakas, Spyridon, Balu, Niranjan, Bano, Sophia, Bawa, Vivek Singh, Bernal, Jorge, Bodenstedt, Sebastian, Casella, Alessandro, Choi, Jinwook, Commowick, Olivier, Daum, Marie, Depeursinge, Adrien, Dorent, Reuben, Egger, Jan, Eichhorn, Hannah, Engelhardt, Sandy, Ganz, Melanie, Girard, Gabriel, Hansen, Lasse, Heinrich, Mattias, Heller, Nicholas, Hering, Alessa, Huaulmé, Arnaud, Kim, Hyunjeong, Landman, Bennett, Li, Hongwei Bran, Li, Jianning, Ma, Jun, Martel, Anne, Martín-Isla, Carlos, Menze, Bjoern, Nwoye, Chinedu Innocent, Oreiller, Valentin, Padoy, Nicolas, Pati, Sarthak, Payette, Kelly, Sudre, Carole, van Wijnen, Kimberlin, Vardazaryan, Armine, Vercauteren, Tom, Wagner, Martin, Wang, Chuanbo, Yap, Moi Hoon, Yu, Zeyun, Yuan, Chun, Zenk, Maximilian, Zia, Aneeq, Zimmerer, David, Bao, Rina, Choi, Chanyeol, Cohen, Andrew, Dzyubachyk, Oleh, Galdran, Adrian, Gan, Tianyuan, Guo, Tianqi, Gupta, Pradyumna, Haithami, Mahmood, Ho, Edward, Jang, Ikbeom, Li, Zhili, Luo, Zhengbo, Lux, Filip, Makrogiannis, Sokratis, Müller, Dominik, Oh, Young-tack, Pang, Subeen, Pape, Constantin, Polat, Gorkem, Reed, Charlotte Rosalie, Ryu, Kanghyun, Scherr, Tim, Thambawita, Vajira, Wang, Haoyu, Wang, Xinliang, Xu, Kele, Yeh, Hung, Yeo, Doyeob, Yuan, Yixuan, Zeng, Yan, Zhao, Xin, Abbing, Julian, Adam, Jannes, Adluru, Nagesh, Agethen, Niklas, Ahmed, Salman, Khalil, Yasmina Al, Alenyà, Mireia, Alhoniemi, Esa, An, Chengyang, Anwar, Talha, Arega, Tewodros Weldebirhan, Avisdris, Netanell, Aydogan, Dogu Baran, Bai, Yingbin, Calisto, Maria Baldeon, Basaran, Berke Doga, Beetz, Marcel, Bian, Cheng, Bian, Hao, Blansit, Kevin, Bloch, Louise, Bohnsack, Robert, Bosticardo, Sara, Breen, Jack, Brudfors, Mikael, Brüngel, Raphael, Cabezas, Mariano, Cacciola, Alberto, Chen, Zhiwei, Chen, Yucong, Chen, Daniel Tianming, Cho, Minjeong, Choi, Min-Kook, Xie, Chuantao Xie Chuantao, Cobzas, Dana, Cohen-Adad, Julien, Acero, Jorge Corral, Das, Sujit Kumar, de Oliveira, Marcela, Deng, Hanqiu, Dong, Guiming, Doorenbos, Lars, Efird, Cory, Escalera, Sergio, Fan, Di, Serj, Mehdi Fatan, Fenneteau, Alexandre, Fidon, Lucas, Filipiak, Patryk, Finzel, René, Freitas, Nuno R., Friedrich, Christoph M., Fulton, Mitchell, Gaida, Finn, Galati, Francesco, Galazis, Christoforos, Gan, Chang Hee, Gao, Zheyao, Gao, Shengbo, Gazda, Matej, Gerats, Beerend, Getty, Neil, Gibicar, Adam, Gifford, Ryan, Gohil, Sajan, Grammatikopoulou, Maria, Grzech, Daniel, Güley, Orhun, Günnemann, Timo, Guo, Chunxu, Guy, Sylvain, Ha, Heonjin, Han, Luyi, Han, Il Song, Hatamizadeh, Ali, He, Tian, Heo, Jimin, Hitziger, Sebastian, Hong, SeulGi, Hong, SeungBum, Huang, Rian, Huang, Ziyan, Huellebrand, Markus, Huschauer, Stephan, Hussain, Mustaffa, Inubushi, Tomoo, Polat, Ece Isik, Jafaritadi, Mojtaba, Jeong, SeongHun, Jian, Bailiang, Jiang, Yuanhong, Jiang, Zhifan, Jin, Yueming, Joshi, Smriti, Kadkhodamohammadi, Abdolrahim, Kamraoui, Reda Abdellah, Kang, Inha, Kang, Junghwa, Karimi, Davood, Khademi, April, Khan, Muhammad Irfan, Khan, Suleiman A., Khantwal, Rishab, Kim, Kwang-Ju, Kline, Timothy, Kondo, Satoshi, Kontio, Elina, Krenzer, Adrian, Kroviakov, Artem, Kuijf, Hugo, Kumar, Satyadwyoom, La Rosa, Francesco,
Lad, Abhi, Lee, Doohee, Lee, Minho, Lena, Chiara, Li, Hao, Li, Ling, Li, Xingyu, Liao, Fuyuan, Liao, KuanLun, Oliveira, Arlindo Limede, Lin, Chaonan, Lin, Shan, Linardos, Akis, Linguraru, Marius George, Liu, Han, Liu, Tao, Liu, Di, Liu, Yanling, Lourenço-Silva, João, Lu, Jingpei, Lu, Jiangshan, Luengo, Imanol, Lund, Christina B., Luu, Huan Minh, Lv, Yi, Lv, Yi, Macar, Uzay, Maechler, Leon, L., Sina Mansour, Marshall, Kenji, Mazher, Moona, McKinley, Richard, Medela, Alfonso, Meissen, Felix, Meng, Mingyuan, Miller, Dylan, Mirjahanmardi, Seyed Hossein, Mishra, Arnab, Mitha, Samir, Mohy-ud-Din, Hassan, Mok, Tony Chi Wing, Murugesan, Gowtham Krishnan, Karthik, Enamundram Naga, Nalawade, Sahil, Nalepa, Jakub, Naser, Mohamed, Nateghi, Ramin, Naveed, Hammad, Nguyen, Quang-Minh, Quoc, Cuong Nguyen, Nichyporuk, Brennan, Oliveira, Bruno, Owen, David, Pal, Jimut Bahan, Pan, Junwen, Pan, Wentao, Pang, Winnie, Park, Bogyu, Pawar, Vivek, Pawar, Kamlesh, Peven, Michael, Philipp, Lena, Pieciak, Tomasz, Plotka, Szymon, Plutat, Marcel, Pourakpour, Fattaneh, Preložnik, Domen, Punithakumar, Kumaradevan, Qayyum, Abdul, Queirós, Sandro, Rahmim, Arman, Razavi, Salar, Ren, Jintao, Rezaei, Mina, Rico, Jonathan Adam, Rieu, ZunHyan, Rink, Markus, Roth, Johannes, Ruiz-Gonzalez, Yusely, Saeed, Numan, Saha, Anindo, Salem, Mostafa, Sanchez-Matilla, Ricardo, Schilling, Kurt, Shao, Wei, Shen, Zhiqiang, Shi, Ruize, Shi, Pengcheng, Sobotka, Daniel, Soulier, Théodore, Fadida, Bella Specktor, Stoyanov, Danail, Mun, Timothy Sum Hon, Sun, Xiaowu, Tao, Rong, Thaler, Franz, Théberge, Antoine, Thielke, Felix, Torres, Helena, Wahid, Kareem A., Wang, Jiacheng, Wang, YiFei, Wang, Wei, Wang, Xiong, Wen, Jianhui, Wen, Ning, Wodzinski, Marek, Wu, Ye, Xia, Fangfang, Xiang, Tianqi, Xiaofei, Chen, Xu, Lizhan, Xue, Tingting, Yang, Yuxuan, Yang, Lin, Yao, Kai, Yao, Huifeng, Yazdani, Amirsaeed, Yip, Michael, Yoo, Hwanseung, Yousefirizi, Fereshteh, Yu, Shunkai, Yu, Lei, Zamora, Jonathan, Zeineldin, Ramy Ashraf, Zeng, Dewen, Zhang, Jianpeng, Zhang, Bokai, Zhang, Jiapeng, Zhang, Fan, Zhang, Huahong, Zhao, Zhongchen, Zhao, Zixuan, Zhao, Jiachen, Zhao, Can, Zheng, Qingshuo, Zhi, Yuheng, Zhou, Ziqi, Zou, Baosheng, Maier-Hein, Klaus, Jäger, Paul F., Kopp-Schneider, Annette, Maier-Hein, Lena
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
RibSeg v2: A Large-scale Benchmark for Rib Labeling and Anatomical Centerline Extraction
Jin, Liang, Gu, Shixuan, Wei, Donglai, Adhinarta, Jason Ken, Kuang, Kaiming, Zhang, Yongjie Jessica, Pfister, Hanspeter, Ni, Bingbing, Yang, Jiancheng, Li, Ming
Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community, or focus on rib segmentation while neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline that includes deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg.
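A sparse point-cloud view of a CT scan keeps only the informative foreground voxels, which is typically orders of magnitude smaller than the dense grid. The snippet below is only a generic illustration of that conversion on a toy volume (intensity thresholding plus random subsampling); it is not the RibSeg v2 pipeline, and the threshold and point budget are arbitrary.

```python
import numpy as np

def volume_to_point_cloud(volume, threshold=200, n_points=2048, seed=0):
    """Convert a dense CT-like volume into a sparse point cloud.

    Keeps voxels above an intensity threshold (bone is bright in CT),
    then randomly subsamples up to a fixed number of coordinates.
    """
    coords = np.argwhere(volume > threshold).astype(np.float32)   # (M, 3) voxel indices
    if len(coords) == 0:
        return coords
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(coords), size=min(n_points, len(coords)), replace=False)
    return coords[keep]

# Toy 64^3 volume with one bright, rib-like slab.
vol = np.zeros((64, 64, 64), dtype=np.int16)
vol[20:24, 10:50, 30:34] = 400
pts = volume_to_point_cloud(vol)
print(vol.size, "voxels ->", len(pts), "points")   # 262144 voxels -> 640 points
```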
The Impact of ChatGPT and LLMs on Medical Imaging Stakeholders: Perspectives and Use Cases
Yang, Jiancheng, Li, Hongwei Bran, Wei, Donglai
This study investigates the transformative potential of Large Language Models (LLMs), such as OpenAI's ChatGPT, in medical imaging. With the aid of public data, these models, which possess remarkable language understanding and generation capabilities, are augmenting the interpretive skills of radiologists, enhancing patient-physician communication, and streamlining clinical workflows. The paper introduces an analytic framework for presenting the complex interactions between LLMs and the broader ecosystem of medical imaging stakeholders, including businesses, insurance entities, governments, research institutions, and hospitals (nicknamed BIGR-H). Through detailed analyses, illustrative use cases, and discussions on the broader implications and future directions, this perspective seeks to stimulate discussion on strategic planning and decision-making in the era of AI-enabled healthcare.
QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity
Huang, Siyu, An, Jie, Wei, Donglai, Luo, Jiebo, Pfister, Hanspeter
Existing style transfer algorithms work by minimizing a hybrid loss function that pushes the generated image toward high similarity in both content and style. However, this type of approach cannot guarantee visual fidelity, i.e., that the generated artworks are indistinguishable from real ones. In this paper, we devise a new style transfer framework called QuantArt for high-visual-fidelity stylization. QuantArt pushes the latent representation of the generated artwork toward the centroids of the real artwork distribution via vector quantization. By fusing the quantized and continuous latent representations, QuantArt allows flexible control over the generated artworks in terms of content preservation, style similarity, and visual fidelity. Experiments on various style transfer settings show that our QuantArt framework achieves significantly higher visual fidelity than existing style transfer methods.
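The core operation described here — snapping each continuous latent vector to its nearest codebook centroid and then blending the quantized and continuous codes — takes only a few lines. The snippet below is a generic vector-quantization sketch with made-up shapes and a random codebook, not QuantArt's implementation; the blending weight `alpha` stands in for the framework's fidelity/content trade-off.

```python
import numpy as np

def quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook centroid (L2 distance)."""
    # latents: (N, D), codebook: (K, D)
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)   # (N, K)
    return codebook[d2.argmin(axis=1)]

def fuse(latents, codebook, alpha=0.5):
    """Blend quantized and continuous codes; alpha trades visual fidelity vs. content."""
    return alpha * quantize(latents, codebook) + (1.0 - alpha) * latents

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))           # continuous latents from an encoder
codebook = rng.normal(size=(16, 8))   # centroids representing the real-artwork distribution
print(fuse(z, codebook, alpha=0.8).shape)   # (4, 8)
```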
Why is the winner the best?
Eisenmann, Matthias, Reinke, Annika, Weru, Vivienn, Tizabi, Minu Dietlinde, Isensee, Fabian, Adler, Tim J., Ali, Sharib, Andrearczyk, Vincent, Aubreville, Marc, Baid, Ujjwal, Bakas, Spyridon, Balu, Niranjan, Bano, Sophia, Bernal, Jorge, Bodenstedt, Sebastian, Casella, Alessandro, Cheplygina, Veronika, Daum, Marie, de Bruijne, Marleen, Depeursinge, Adrien, Dorent, Reuben, Egger, Jan, Ellis, David G., Engelhardt, Sandy, Ganz, Melanie, Ghatwary, Noha, Girard, Gabriel, Godau, Patrick, Gupta, Anubha, Hansen, Lasse, Harada, Kanako, Heinrich, Mattias, Heller, Nicholas, Hering, Alessa, Huaulmé, Arnaud, Jannin, Pierre, Kavur, Ali Emre, Kodym, Oldřich, Kozubek, Michal, Li, Jianning, Li, Hongwei, Ma, Jun, Martín-Isla, Carlos, Menze, Bjoern, Noble, Alison, Oreiller, Valentin, Padoy, Nicolas, Pati, Sarthak, Payette, Kelly, Rädsch, Tim, Rafael-Patiño, Jonathan, Bawa, Vivek Singh, Speidel, Stefanie, Sudre, Carole H., van Wijnen, Kimberlin, Wagner, Martin, Wei, Donglai, Yamlahi, Amine, Yap, Moi Hoon, Yuan, Chun, Zenk, Maximilian, Zia, Aneeq, Zimmerer, David, Aydogan, Dogu Baran, Bhattarai, Binod, Bloch, Louise, Brüngel, Raphael, Cho, Jihoon, Choi, Chanyeol, Dou, Qi, Ezhov, Ivan, Friedrich, Christoph M., Fuller, Clifton, Gaire, Rebati Raman, Galdran, Adrian, Faura, Álvaro García, Grammatikopoulou, Maria, Hong, SeulGi, Jahanifar, Mostafa, Jang, Ikbeom, Kadkhodamohammadi, Abdolrahim, Kang, Inha, Kofler, Florian, Kondo, Satoshi, Kuijf, Hugo, Li, Mingxing, Luu, Minh Huan, Martinčič, Tomaž, Morais, Pedro, Naser, Mohamed A., Oliveira, Bruno, Owen, David, Pang, Subeen, Park, Jinah, Park, Sung-Hong, Płotka, Szymon, Puybareau, Elodie, Rajpoot, Nasir, Ryu, Kanghyun, Saeed, Numan, Shephard, Adam, Shi, Pengcheng, Štepec, Dejan, Subedi, Ronast, Tochon, Guillaume, Torres, Helena R., Urien, Helene, Vilaça, João L., Wahid, Kareem Abdul, Wang, Haojie, Wang, Jiacheng, Wang, Liansheng, Wang, Xiyue, Wiestler, Benedikt, Wodzinski, Marek, Xia, Fangfang, Xie, Juanying, Xiong, Zhiwei, Yang, Sen, Yang, Yanwu, Zhao, Zixuan, Maier-Hein, Klaus, Jäger, Paul F., Kopp-Schneider, Annette, Maier-Hein, Lena
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their rank and the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: reflecting the metrics in the method design and focusing on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
The XPRESS Challenge: Xray Projectomic Reconstruction -- Extracting Segmentation with Skeletons
Nguyen, Tri, Narwani, Mukul, Larson, Mark, Li, Yicong, Xie, Shuhan, Pfister, Hanspeter, Wei, Donglai, Shavit, Nir, Mi, Lu, Pacureanu, Alexandra, Lee, Wei-Chung, Kuan, Aaron T.
The wiring and connectivity of neurons form a structural basis for the function of the nervous system. Advances in volume electron microscopy (EM) and image segmentation have enabled mapping of circuit diagrams (connectomics) within local regions of the mouse brain. However, applying volume EM over the whole brain is not currently feasible due to technological challenges. As a result, comprehensive maps of long-range connections between brain regions are lacking. Recently, we demonstrated that X-ray holographic nanotomography (XNH) can provide high-resolution images of brain tissue at a much larger scale than EM. In particular, XNH is well-suited to resolve large, myelinated axon tracts (white matter) that make up the bulk of long-range connections (projections) and are critical for inter-region communication. Thus, XNH provides an imaging solution for brain-wide projectomics. However, because XNH data is typically collected at lower resolutions and larger fields-of-view than EM, accurate segmentation of XNH images remains an important challenge, which we present here. In this task, we provide volumetric XNH images of cortical white matter axons from the mouse brain along with ground truth annotations for axon trajectories. Manual voxel-wise annotation of ground truth is a time-consuming bottleneck for training segmentation networks. On the other hand, skeleton-based ground truth is much faster to annotate and sufficient to determine connectivity. Therefore, we encourage participants to develop methods that leverage skeleton-based training. To this end, we provide two types of ground-truth annotations: a small volume of voxel-wise annotations and a larger volume with skeleton-based annotations. Entries will be evaluated on how accurately the submitted segmentations agree with the ground-truth skeleton annotations.
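Skeleton-based evaluation asks, for each annotated axon, which predicted segment its skeleton nodes fall into. The snippet below is a deliberately simplified node-level agreement score under assumed inputs (a labeled segmentation volume and per-axon node coordinates); it illustrates the idea of scoring against skeletons rather than dense masks and is not the challenge's official metric.

```python
import numpy as np
from collections import Counter

def skeleton_node_agreement(segmentation, skeletons):
    """Fraction of skeleton nodes lying in their axon's majority predicted segment.

    segmentation : 3D integer array of predicted segment IDs
    skeletons    : dict {axon_id: (N, 3) integer voxel coordinates of skeleton nodes}
    A low score indicates splits (one axon scattered over many segments); merges
    between axons would need an additional check and are ignored here.
    """
    correct, total = 0, 0
    for coords in skeletons.values():
        labels = segmentation[tuple(coords.T)]                # segment ID under each node
        majority = Counter(labels.tolist()).most_common(1)[0][0]
        correct += int((labels == majority).sum())
        total += len(labels)
    return correct / max(total, 1)

# Toy example: one axon whose four nodes mostly lie in segment 7.
seg = np.zeros((10, 10, 10), dtype=int)
seg[:, :, :5] = 7
seg[:, :, 5:] = 9
skeleton = {1: np.array([[2, 2, 1], [2, 2, 3], [2, 2, 4], [2, 2, 6]])}
print(skeleton_node_agreement(seg, skeleton))   # 0.75
```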