Li, Hongwei Bran
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning
Pan, Jiazhen, Liu, Che, Wu, Junde, Liu, Fenglin, Zhu, Jiayuan, Li, Hongwei Bran, Chen, Chen, Ouyang, Cheng, Rueckert, Daniel
Reasoning is a critical frontier for advancing medical image analysis, where transparency and trustworthiness play a central role in both clinician trust and regulatory approval. Although medical Vision-Language Models (VLMs) show promise for radiological tasks, most existing VLMs merely produce final answers without revealing the underlying reasoning. To address this gap, we introduce MedVLM-R1, a medical VLM that explicitly generates natural language reasoning to enhance transparency and trustworthiness. Instead of relying on supervised fine-tuning (SFT), which often overfits to training distributions and fails to foster genuine reasoning, MedVLM-R1 employs a reinforcement learning framework that incentivizes the model to discover human-interpretable reasoning paths without using any reasoning references. Despite limited training data (600 visual question answering samples) and model size (2B parameters), MedVLM-R1 boosts accuracy from 55.11% to 78.22% across MRI, CT, and X-ray benchmarks, outperforming larger models trained on over a million samples. It also demonstrates robust domain generalization under out-of-distribution tasks. By unifying medical image analysis with explicit reasoning, MedVLM-R1 marks a pivotal step toward trustworthy and interpretable AI in clinical practice. The inference model is available at: https://huggingface.co/JZPeterPan/MedVLM-R1
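The abstract stresses that the reward signal needs no reasoning references, only the final answer. Below is a minimal sketch of what such a rule-based reward could look like; the `<think>`/`<answer>` tag format and the additive combination are illustrative assumptions, not the paper's exact implementation.

```python
import re

def format_reward(completion: str) -> float:
    """1.0 if the output wraps its reasoning and answer in the expected tags."""
    pattern = r"^<think>.*?</think>\s*<answer>.*?</answer>$"
    return 1.0 if re.match(pattern, completion.strip(), re.DOTALL) else 0.0

def accuracy_reward(completion: str, ground_truth: str) -> float:
    """1.0 if the extracted final answer matches the reference label."""
    m = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if m is None:
        return 0.0
    return 1.0 if m.group(1).strip().upper() == ground_truth.strip().upper() else 0.0

def total_reward(completion: str, ground_truth: str) -> float:
    # The policy is optimized against this scalar (e.g., with a GRPO-style
    # objective); ground-truth reasoning traces are never needed.
    return format_reward(completion) + accuracy_reward(completion, ground_truth)
```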
Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy Planning Automated Segmentation
LaBella, Dominic, Schumacher, Katherine, Mix, Michael, Leu, Kevin, McBurney-Lin, Shan, Nedelec, Pierre, Villanueva-Meyer, Javier, Shapey, Jonathan, Vercauteren, Tom, Chia, Kazumi, Al-Salihi, Omar, Leu, Justin, Halasz, Lia, Velichko, Yury, Wang, Chunhao, Kirkpatrick, John, Floyd, Scott, Reitman, Zachary J., Mullikin, Trey, Bagci, Ulas, Sachdev, Sean, Hattangadi-Gluth, Jona A., Seibert, Tyler, Farid, Nikdokht, Puett, Connor, Pease, Matthew W., Shiue, Kevin, Anwar, Syed Muhammad, Faghani, Shahriar, Haider, Muhammad Ammar, Warman, Pranav, Albrecht, Jake, Jakab, András, Moassefi, Mana, Chung, Verena, Aristizabal, Alejandro, Karargyris, Alexandros, Kassem, Hasan, Pati, Sarthak, Sheller, Micah, Huang, Christina, Coley, Aaron, Ghanta, Siddharth, Schneider, Alex, Sharp, Conrad, Saluja, Rachit, Kofler, Florian, Lohmann, Philipp, Vollmuth, Phillipp, Gagnon, Louis, Adewole, Maruf, Li, Hongwei Bran, Kazerooni, Anahita Fathi, Tahon, Nourel Hoda, Anazodo, Udunna, Moawad, Ahmed W., Menze, Bjoern, Linguraru, Marius George, Aboian, Mariam, Wiestler, Benedikt, Baid, Ujjwal, Conte, Gian-Marco, Rauschecker, Andreas M. T., Nada, Ayman, Abayazeed, Aly H., Huang, Raymond, de Verdier, Maria Correia, Rudie, Jeffrey D., Bakas, Spyridon, Calabrese, Evan
The 2024 Brain Tumor Segmentation Meningioma Radiotherapy (BraTS-MEN-RT) challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs with expert-annotated target labels for patients with intact or post-operative meningioma who underwent either conventional external beam radiotherapy or stereotactic radiosurgery. Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space, accompanied by a single-label "target volume" representing the gross tumor volume (GTV) and any at-risk post-operative site. Target volume annotations adhere to established radiotherapy planning protocols, ensuring consistency across cases and institutions. For pre-operative meningiomas, the target volume encompasses the entire GTV and associated nodular dural tail, while for post-operative cases, it includes at-risk resection cavity margins as determined by the treating institution. Case annotations were reviewed and approved by expert neuroradiologists and radiation oncologists. Participating teams will develop, containerize, and evaluate automated segmentation models using this comprehensive dataset. Model performance will be assessed using the lesion-wise Dice Similarity Coefficient and the 95% Hausdorff distance. The top-performing teams will be recognized at the Medical Image Computing and Computer Assisted Intervention Conference in October 2024. BraTS-MEN-RT is expected to significantly advance automated radiotherapy planning by enabling precise tumor segmentation and facilitating tailored treatment, ultimately improving patient outcomes.
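For reference, the two evaluation metrics named above are standard mask-comparison measures. A minimal sketch of their plain (whole-image) versions follows; the challenge's lesion-wise variants additionally match individual lesions between prediction and reference before scoring, which is omitted here.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    sp = pred.astype(bool) & ~binary_erosion(pred.astype(bool))
    sg = gt.astype(bool) & ~binary_erosion(gt.astype(bool))
    if not sp.any() or not sg.any():
        return float("inf")  # undefined when either surface is empty
    # Distance from each surface voxel of one mask to the other mask's surface.
    d_to_g = distance_transform_edt(~sg, sampling=spacing)[sp]
    d_to_p = distance_transform_edt(~sp, sampling=spacing)[sg]
    return float(np.percentile(np.concatenate([d_to_g, d_to_p]), 95))
```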
Probabilistic Contrastive Learning with Explicit Concentration on the Hypersphere
Li, Hongwei Bran, Ouyang, Cheng, Amiranashvili, Tamaz, Rosen, Matthew S., Menze, Bjoern, Iglesias, Juan Eugenio
Self-supervised contrastive learning has predominantly adopted deterministic methods, which are not suited for environments characterized by uncertainty and noise. This paper introduces a new perspective on incorporating uncertainty into contrastive learning by embedding representations within a spherical space, inspired by the von Mises-Fisher (vMF) distribution. We introduce an unnormalized form of the vMF distribution and leverage the concentration parameter, kappa, as a direct, interpretable measure to quantify uncertainty explicitly. This approach not only provides a probabilistic interpretation of the embedding space but also offers a method to calibrate model confidence against varying levels of data corruption and characteristics. Our empirical results demonstrate that the estimated concentration parameter correlates strongly with the degree of unforeseen data corruption encountered at test time, enables failure analysis, and enhances existing out-of-distribution detection methods.
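The vMF density on the unit hypersphere is f(x; mu, kappa) proportional to exp(kappa * mu^T x), with kappa controlling how sharply mass concentrates around the mean direction mu. The sketch below shows one plausible way to predict mu and kappa per sample and to fold kappa into an InfoNCE-style objective; the head layout and loss weighting are illustrative assumptions, not the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VMFHead(nn.Module):
    """Predicts a mean direction mu on the unit hypersphere and a positive
    concentration kappa; higher kappa means lower uncertainty."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.mu_layer = nn.Linear(in_dim, embed_dim)
        self.kappa_layer = nn.Linear(in_dim, 1)

    def forward(self, h: torch.Tensor):
        mu = F.normalize(self.mu_layer(h), dim=-1)      # unit-norm direction
        kappa = F.softplus(self.kappa_layer(h)) + 1e-6  # strictly positive
        return mu, kappa

def vmf_contrastive_loss(mu: torch.Tensor, kappa: torch.Tensor, tau: float = 0.1):
    """InfoNCE-style loss where each anchor's similarities are scaled by its
    own kappa, so uncertain samples yield flatter similarity distributions.
    Expects mu of shape (2N, D), where rows i and i+N are two views of the
    same image (positive pairs), and kappa of shape (2N, 1)."""
    logits = (kappa * mu) @ mu.t() / tau  # kappa * mu: unnormalized vMF parameter
    logits.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = mu.shape[0] // 2
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(mu.device)
    return F.cross_entropy(logits, targets)
```

At test time, kappa alone serves as the per-sample uncertainty estimate, which the abstract reports correlates with unseen data corruption.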
Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge
LaBella, Dominic, Baid, Ujjwal, Khanna, Omaditya, McBurney-Lin, Shan, McLean, Ryan, Nedelec, Pierre, Rashid, Arif, Tahon, Nourel Hoda, Altes, Talissa, Bhalerao, Radhika, Dhemesh, Yaseen, Godfrey, Devon, Hilal, Fathi, Floyd, Scott, Janas, Anastasia, Kazerooni, Anahita Fathi, Kirkpatrick, John, Kent, Collin, Kofler, Florian, Leu, Kevin, Maleki, Nazanin, Menze, Bjoern, Pajot, Maxence, Reitman, Zachary J., Rudie, Jeffrey D., Saluja, Rachit, Velichko, Yury, Wang, Chunhao, Warman, Pranav, Adewole, Maruf, Albrecht, Jake, Anazodo, Udunna, Anwar, Syed Muhammad, Bergquist, Timothy, Chen, Sully Francis, Chung, Verena, Conte, Gian-Marco, Dako, Farouk, Eddy, James, Ezhov, Ivan, Khalili, Nastaran, Iglesias, Juan Eugenio, Jiang, Zhifan, Johanson, Elaine, Van Leemput, Koen, Li, Hongwei Bran, Linguraru, Marius George, Liu, Xinyang, Mahtabfar, Aria, Meier, Zeke, Moawad, Ahmed W., Mongan, John, Piraud, Marie, Shinohara, Russell Takeshi, Wiggins, Walter F., Abayazeed, Aly H., Akinola, Rachel, Jakab, András, Bilello, Michel, de Verdier, Maria Correia, Crivellaro, Priscila, Davatzikos, Christos, Farahani, Keyvan, Freymann, John, Hess, Christopher, Huang, Raymond, Lohmann, Philipp, Moassefi, Mana, Pease, Matthew W., Vollmuth, Phillipp, Sollmann, Nico, Diffley, David, Nandolia, Khanak K., Warren, Daniel I., Hussain, Ali, Fehringer, Pascal, Bronstein, Yulia, Deptula, Lisa, Stein, Evan G., Taherzadeh, Mahsa, de Oliveira, Eduardo Portela, Haughey, Aoife, Kontzialis, Marinos, Saba, Luca, Turner, Benjamin, Brüßeler, Melanie M. T., Ansari, Shehbaz, Gkampenis, Athanasios, Weiss, David Maximilian, Mansour, Aya, Shawali, Islam H., Yordanov, Nikolay, Stein, Joel M., Hourani, Roula, Moshebah, Mohammed Yahya, Abouelatta, Ahmed Magdy, Rizvi, Tanvir, Willms, Klara, Martin, Dann C., Okar, Abdullah, D'Anna, Gennaro, Taha, Ahmed, Sharifi, Yasaman, Faghani, Shahriar, Kite, Dominic, Pinho, Marco, Haider, Muhammad Ammar, Aristizabal, Alejandro, Karargyris, Alexandros, Kassem, Hasan, Pati, Sarthak, Sheller, Micah, Alonso-Basanta, Michelle, Villanueva-Meyer, Javier, Rauschecker, Andreas M., Nada, Ayman, Aboian, Mariam, Flanders, Adam E., Wiestler, Benedikt, Bakas, Spyridon, Calabrese, Evan
We describe the design and results from the BraTS 2023 Intracranial Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas, which are typically benign extra-axial tumors with diverse radiologic and anatomical presentation and a propensity for multiplicity. Nine participating teams each developed deep-learning automated segmentation models using image data from the largest multi-institutional, systematically expert-annotated, multilabel, multi-sequence meningioma MRI dataset to date, which included 1000 training set cases, 141 validation set cases, and 283 hidden test set cases. Each case included T2, T2/FLAIR, T1, and T1Gd brain MRI sequences with associated tumor compartment labels delineating enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Participant automated segmentation models were evaluated and ranked based on a scoring system using lesion-wise metrics, including the Dice Similarity Coefficient (DSC) and the 95% Hausdorff distance. The top-ranked team had a lesion-wise median DSC of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively, and a corresponding average DSC of 0.899, 0.904, and 0.871, respectively. These results serve as state-of-the-art benchmarks for future pre-operative meningioma automated segmentation algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least one tumor compartment voxel abutting the edge of the skull-stripped image, which warrants further investigation into optimal pre-processing face anonymization steps.
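The edge-abutment finding above can be checked programmatically. A minimal sketch follows, under the assumption that skull-stripping zeroes out non-brain voxels, so "abutting the edge" means a labeled voxel adjacent to zero-intensity background; the challenge's actual quality-control procedure may differ.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def abuts_stripped_edge(image: np.ndarray, labels: np.ndarray) -> bool:
    """True if any tumor-compartment voxel touches the edge of the
    skull-stripped brain (i.e., is adjacent to zeroed-out background)."""
    background = image == 0                  # assumption: stripping zeroes non-brain
    near_edge = binary_dilation(background)  # background plus a 1-voxel shell
    return bool(np.logical_and(labels > 0, near_edge).any())
```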
TopCoW: Benchmarking Topology-Aware Anatomical Segmentation of the Circle of Willis (CoW) for CTA and MRA
Yang, Kaiyuan, Musio, Fabio, Ma, Yihui, Juchler, Norman, Paetzold, Johannes C., Al-Maskari, Rami, Höher, Luciano, Li, Hongwei Bran, Hamamci, Ibrahim Ethem, Sekuboyina, Anjany, Shit, Suprosanna, Huang, Houjing, Waldmannstetter, Diana, Kofler, Florian, Navarro, Fernando, Menten, Martin, Ezhov, Ivan, Rueckert, Daniel, Vos, Iris, Ruigrok, Ynte, Velthuis, Birgitta, Kuijf, Hugo, Hämmerli, Julien, Wurster, Catherine, Bijlenga, Philippe, Westphal, Laura, Bisschop, Jeroen, Colombo, Elisa, Baazaoui, Hakim, Makmur, Andrew, Hallinan, James, Wiestler, Bene, Kirschke, Jan S., Wiest, Roland, Montagnon, Emmanuel, Letourneau-Guillon, Laurent, Galdran, Adrian, Galati, Francesco, Falcetta, Daniele, Zuluaga, Maria A., Lin, Chaolong, Zhao, Haoran, Zhang, Zehan, Ra, Sinyoung, Hwang, Jongyun, Park, Hyunjin, Chen, Junqiang, Wodzinski, Marek, Müller, Henning, Shi, Pengcheng, Liu, Wei, Ma, Ting, Yalçin, Cansu, Hamadache, Rachika E., Salvi, Joaquim, Llado, Xavier, Estrada, Uma Maria Lal-Trehan, Abramova, Valeriia, Giancardo, Luca, Oliver, Arnau, Liu, Jialu, Huang, Haibin, Cui, Yue, Lin, Zehang, Liu, Yusheng, Zhu, Shunzhi, Patel, Tatsat R., Tutino, Vincent M., Orouskhani, Maysam, Wang, Huayu, Mossa-Basha, Mahmud, Zhu, Chengcheng, Rokuss, Maximilian R., Kirchhoff, Yannick, Disch, Nico, Holzschuh, Julius, Isensee, Fabian, Maier-Hein, Klaus, Sato, Yuki, Hirsch, Sven, Wegener, Susanne, Menze, Bjoern
The Circle of Willis (CoW) is an important network of arteries connecting the major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neuro-vascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but public datasets with annotations on CoW anatomy remain limited, especially for CTA. Therefore, we organized the TopCoW Challenge in 2023 with the release of an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. The TopCoW challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, which attracted over 140 registered participants from four continents. The top-performing teams managed to segment many CoW components to Dice scores around 90%, but with lower scores for communicating arteries and rare variants. There were also topological mistakes in predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and matching CoW variant topology accurately. TopCoW represented a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.
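The observation that a prediction can score a high Dice yet still be topologically wrong is easy to reproduce, since overlap metrics ignore connectivity. Below is a minimal sketch contrasting per-class Dice with a simple connected-component count per vessel class; this illustrates the idea only and is not the challenge's official topology metric.

```python
import numpy as np
from scipy.ndimage import label as connected_components

def classwise_dice(pred: np.ndarray, gt: np.ndarray, n_classes: int) -> dict:
    """Per-class Dice over integer label maps (class 0 = background)."""
    scores = {}
    for c in range(1, n_classes + 1):
        p, g = pred == c, gt == c
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else float("nan")
    return scores

def topology_flags(pred: np.ndarray, gt: np.ndarray, n_classes: int) -> dict:
    """Flag classes whose predicted number of connected components differs
    from the reference: a vessel can overlap well yet still be fragmented."""
    flags = {}
    for c in range(1, n_classes + 1):
        _, n_pred = connected_components(pred == c)
        _, n_gt = connected_components(gt == c)
        flags[c] = n_pred != n_gt
    return flags
```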
The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs)
Kazerooni, Anahita Fathi, Khalili, Nastaran, Liu, Xinyang, Haldar, Debanjan, Jiang, Zhifan, Anwar, Syed Muhammed, Albrecht, Jake, Adewole, Maruf, Anazodo, Udunna, Anderson, Hannah, Bagheri, Sina, Baid, Ujjwal, Bergquist, Timothy, Borja, Austin J., Calabrese, Evan, Chung, Verena, Conte, Gian-Marco, Dako, Farouk, Eddy, James, Ezhov, Ivan, Familiar, Ariana, Farahani, Keyvan, Haldar, Shuvanjan, Iglesias, Juan Eugenio, Janas, Anastasia, Johansen, Elaine, Jones, Blaise V, Kofler, Florian, LaBella, Dominic, Lai, Hollie Anne, Van Leemput, Koen, Li, Hongwei Bran, Maleki, Nazanin, McAllister, Aaron S, Meier, Zeke, Menze, Bjoern, Moawad, Ahmed W, Nandolia, Khanak K, Pavaine, Julija, Piraud, Marie, Poussaint, Tina, Prabhu, Sanjay P, Reitman, Zachary, Rodriguez, Andres, Rudie, Jeffrey D, Shaikh, Ibraheem Salman, Shah, Lubdha M., Sheth, Nakul, Shinohara, Russel Taki, Tu, Wenxin, Viswanathan, Karthik, Wang, Chunhao, Ware, Jeffrey B, Wiestler, Benedikt, Wiggins, Walter, Zapaishchykova, Anna, Aboian, Mariam, Bornhorst, Miriam, de Blank, Peter, Deutsch, Michelle, Fouladi, Maryam, Hoffman, Lindsey, Kann, Benjamin, Lazow, Margot, Mikael, Leonie, Nabavizadeh, Ali, Packer, Roger, Resnick, Adam, Rood, Brian, Vossough, Arastoo, Bakas, Spyridon, Linguraru, Marius George
Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful 12-year history of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, the first BraTS challenge focused on pediatric brain tumors, with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through the standardized quantitative performance evaluation metrics used across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials and, ultimately, the care of children with brain tumors.
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Li, Jianning, Zhou, Zongwei, Yang, Jiancheng, Pepe, Antonio, Gsaxner, Christina, Luijten, Gijs, Qu, Chongyu, Zhang, Tiezheng, Chen, Xiaoxi, Li, Wenxuan, Wodzinski, Marek, Friedrich, Paul, Xie, Kangxian, Jin, Yuan, Ambigapathy, Narmada, Nasca, Enrico, Solak, Naida, Melito, Gian Marco, Vu, Viet Duc, Memon, Afaque R., Schlachta, Christopher, De Ribaupierre, Sandrine, Patel, Rajnikant, Eagleson, Roy, Chen, Xiaojun, Mächler, Heinrich, Kirschke, Jan Stefan, de la Rosa, Ezequiel, Christ, Patrick Ferdinand, Li, Hongwei Bran, Ellis, David G., Aizenberg, Michele R., Gatidis, Sergios, Küstner, Thomas, Shusharina, Nadya, Heller, Nicholas, Andrearczyk, Vincent, Depeursinge, Adrien, Hatt, Mathieu, Sekuboyina, Anjany, Löffler, Maximilian, Liebl, Hans, Dorent, Reuben, Vercauteren, Tom, Shapey, Jonathan, Kujawa, Aaron, Cornelissen, Stefan, Langenhuizen, Patrick, Ben-Hamadou, Achraf, Rekik, Ahmed, Pujades, Sergi, Boyer, Edmond, Bolelli, Federico, Grana, Costantino, Lumetti, Luca, Salehi, Hamidreza, Ma, Jun, Zhang, Yao, Gharleghi, Ramtin, Beier, Susann, Sowmya, Arcot, Garza-Villarreal, Eduardo A., Balducci, Thania, Angeles-Valdez, Diego, Souza, Roberto, Rittner, Leticia, Frayne, Richard, Ji, Yuanfeng, Ferrari, Vincenzo, Chatterjee, Soumick, Dubost, Florian, Schreiber, Stefanie, Mattern, Hendrik, Speck, Oliver, Haehn, Daniel, John, Christoph, Nürnberger, Andreas, Pedrosa, João, Ferreira, Carlos, Aresta, Guilherme, Cunha, António, Campilho, Aurélio, Suter, Yannick, Garcia, Jose, Lalande, Alain, Vandenbossche, Vicky, Van Oevelen, Aline, Duquesne, Kate, Mekhzoum, Hamza, Vandemeulebroucke, Jef, Audenaert, Emmanuel, Krebs, Claudia, van Leeuwen, Timo, Vereecke, Evie, Heidemeyer, Hauke, Röhrig, Rainer, Hölzle, Frank, Badeli, Vahid, Krieger, Kathrin, Gunzer, Matthias, Chen, Jianxu, van Meegdenburg, Timo, Dada, Amin, Balzer, Miriam, Fragemann, Jana, Jonske, Frederic, Rempe, Moritz, Malorodov, Stanislav, Bahnsen, Fin H., Seibold, Constantin, Jaus, Alexander, Marinov, Zdravko, Jaeger, Paul F., Stiefelhagen, Rainer, Santos, Ana Sofia, Lindo, Mariana, Ferreira, André, Alves, Victor, Kamp, Michael, Abourayya, Amr, Nensa, Felix, Hörst, Fabian, Brehmer, Alexander, Heine, Lukas, Hanusrichter, Yannik, Weßling, Martin, Dudda, Marcel, Podleska, Lars E., Fink, Matthias A., Keyl, Julius, Tserpes, Konstantinos, Kim, Moon-Sung, Elhabian, Shireen, Lamecker, Hans, Zukić, Dženan, Paniagua, Beatriz, Wachinger, Christian, Urschler, Martin, Duong, Luc, Wasserthal, Jakob, Hoyer, Peter F., Basu, Oliver, Maal, Thomas, Witjes, Max J. H., Schiele, Gregor, Chang, Ti-chiun, Ahmadi, Seyed-Ahmad, Luo, Ping, Menze, Bjoern, Reyes, Mauricio, Deserno, Thomas M., Davatzikos, Christos, Puladi, Behrus, Fua, Pascal, Yuille, Alan L., Kleesiek, Jens, Egger, Jan
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
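Once a shape has been obtained (the official Python API is documented on the project pages linked above), a mesh can be converted into the representations the abstract mentions. The sketch below uses the third-party trimesh library and a hypothetical filename purely for illustration; it is not the MedShapeNet API itself.

```python
import trimesh

# Hypothetical filename; actual shapes come via the MedShapeNet web interface/API.
mesh = trimesh.load("example_organ.stl")

# The same mesh can feed different benchmark flavors:
points = mesh.sample(2048)          # point cloud, e.g., for PointNet-style models
voxels = mesh.voxelized(pitch=2.0)  # occupancy grid for 3D-CNN baselines

print(mesh.is_watertight, points.shape, voxels.matrix.shape)
```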
Biomedical image analysis competitions: The state of current participation practice
Eisenmann, Matthias, Reinke, Annika, Weru, Vivienn, Tizabi, Minu Dietlinde, Isensee, Fabian, Adler, Tim J., Godau, Patrick, Cheplygina, Veronika, Kozubek, Michal, Ali, Sharib, Gupta, Anubha, Kybic, Jan, Noble, Alison, de Solórzano, Carlos Ortiz, Pachade, Samiksha, Petitjean, Caroline, Sage, Daniel, Wei, Donglai, Wilden, Elizabeth, Alapatt, Deepak, Andrearczyk, Vincent, Baid, Ujjwal, Bakas, Spyridon, Balu, Niranjan, Bano, Sophia, Bawa, Vivek Singh, Bernal, Jorge, Bodenstedt, Sebastian, Casella, Alessandro, Choi, Jinwook, Commowick, Olivier, Daum, Marie, Depeursinge, Adrien, Dorent, Reuben, Egger, Jan, Eichhorn, Hannah, Engelhardt, Sandy, Ganz, Melanie, Girard, Gabriel, Hansen, Lasse, Heinrich, Mattias, Heller, Nicholas, Hering, Alessa, Huaulmé, Arnaud, Kim, Hyunjeong, Landman, Bennett, Li, Hongwei Bran, Li, Jianning, Ma, Jun, Martel, Anne, Martín-Isla, Carlos, Menze, Bjoern, Nwoye, Chinedu Innocent, Oreiller, Valentin, Padoy, Nicolas, Pati, Sarthak, Payette, Kelly, Sudre, Carole, van Wijnen, Kimberlin, Vardazaryan, Armine, Vercauteren, Tom, Wagner, Martin, Wang, Chuanbo, Yap, Moi Hoon, Yu, Zeyun, Yuan, Chun, Zenk, Maximilian, Zia, Aneeq, Zimmerer, David, Bao, Rina, Choi, Chanyeol, Cohen, Andrew, Dzyubachyk, Oleh, Galdran, Adrian, Gan, Tianyuan, Guo, Tianqi, Gupta, Pradyumna, Haithami, Mahmood, Ho, Edward, Jang, Ikbeom, Li, Zhili, Luo, Zhengbo, Lux, Filip, Makrogiannis, Sokratis, Müller, Dominik, Oh, Young-tack, Pang, Subeen, Pape, Constantin, Polat, Gorkem, Reed, Charlotte Rosalie, Ryu, Kanghyun, Scherr, Tim, Thambawita, Vajira, Wang, Haoyu, Wang, Xinliang, Xu, Kele, Yeh, Hung, Yeo, Doyeob, Yuan, Yixuan, Zeng, Yan, Zhao, Xin, Abbing, Julian, Adam, Jannes, Adluru, Nagesh, Agethen, Niklas, Ahmed, Salman, Khalil, Yasmina Al, Alenyà, Mireia, Alhoniemi, Esa, An, Chengyang, Anwar, Talha, Arega, Tewodros Weldebirhan, Avisdris, Netanell, Aydogan, Dogu Baran, Bai, Yingbin, Calisto, Maria Baldeon, Basaran, Berke Doga, Beetz, Marcel, Bian, Cheng, Bian, Hao, Blansit, Kevin, Bloch, Louise, Bohnsack, Robert, Bosticardo, Sara, Breen, Jack, Brudfors, Mikael, Brüngel, Raphael, Cabezas, Mariano, Cacciola, Alberto, Chen, Zhiwei, Chen, Yucong, Chen, Daniel Tianming, Cho, Minjeong, Choi, Min-Kook, Xie, Chuantao, Cobzas, Dana, Cohen-Adad, Julien, Acero, Jorge Corral, Das, Sujit Kumar, de Oliveira, Marcela, Deng, Hanqiu, Dong, Guiming, Doorenbos, Lars, Efird, Cory, Escalera, Sergio, Fan, Di, Serj, Mehdi Fatan, Fenneteau, Alexandre, Fidon, Lucas, Filipiak, Patryk, Finzel, René, Freitas, Nuno R., Friedrich, Christoph M., Fulton, Mitchell, Gaida, Finn, Galati, Francesco, Galazis, Christoforos, Gan, Chang Hee, Gao, Zheyao, Gao, Shengbo, Gazda, Matej, Gerats, Beerend, Getty, Neil, Gibicar, Adam, Gifford, Ryan, Gohil, Sajan, Grammatikopoulou, Maria, Grzech, Daniel, Güley, Orhun, Günnemann, Timo, Guo, Chunxu, Guy, Sylvain, Ha, Heonjin, Han, Luyi, Han, Il Song, Hatamizadeh, Ali, He, Tian, Heo, Jimin, Hitziger, Sebastian, Hong, SeulGi, Hong, SeungBum, Huang, Rian, Huang, Ziyan, Huellebrand, Markus, Huschauer, Stephan, Hussain, Mustaffa, Inubushi, Tomoo, Polat, Ece Isik, Jafaritadi, Mojtaba, Jeong, SeongHun, Jian, Bailiang, Jiang, Yuanhong, Jiang, Zhifan, Jin, Yueming, Joshi, Smriti, Kadkhodamohammadi, Abdolrahim, Kamraoui, Reda Abdellah, Kang, Inha, Kang, Junghwa, Karimi, Davood, Khademi, April, Khan, Muhammad Irfan, Khan, Suleiman A., Khantwal, Rishab, Kim, Kwang-Ju, Kline, Timothy, Kondo, Satoshi, Kontio, Elina, Krenzer, Adrian, Kroviakov, Artem, Kuijf, Hugo, Kumar, Satyadwyoom, La Rosa, Francesco, 
Lad, Abhi, Lee, Doohee, Lee, Minho, Lena, Chiara, Li, Hao, Li, Ling, Li, Xingyu, Liao, Fuyuan, Liao, KuanLun, Oliveira, Arlindo Limede, Lin, Chaonan, Lin, Shan, Linardos, Akis, Linguraru, Marius George, Liu, Han, Liu, Tao, Liu, Di, Liu, Yanling, Lourenço-Silva, João, Lu, Jingpei, Lu, Jiangshan, Luengo, Imanol, Lund, Christina B., Luu, Huan Minh, Lv, Yi, Lv, Yi, Macar, Uzay, Maechler, Leon, L., Sina Mansour, Marshall, Kenji, Mazher, Moona, McKinley, Richard, Medela, Alfonso, Meissen, Felix, Meng, Mingyuan, Miller, Dylan, Mirjahanmardi, Seyed Hossein, Mishra, Arnab, Mitha, Samir, Mohy-ud-Din, Hassan, Mok, Tony Chi Wing, Murugesan, Gowtham Krishnan, Karthik, Enamundram Naga, Nalawade, Sahil, Nalepa, Jakub, Naser, Mohamed, Nateghi, Ramin, Naveed, Hammad, Nguyen, Quang-Minh, Quoc, Cuong Nguyen, Nichyporuk, Brennan, Oliveira, Bruno, Owen, David, Pal, Jimut Bahan, Pan, Junwen, Pan, Wentao, Pang, Winnie, Park, Bogyu, Pawar, Vivek, Pawar, Kamlesh, Peven, Michael, Philipp, Lena, Pieciak, Tomasz, Plotka, Szymon, Plutat, Marcel, Pourakpour, Fattaneh, Preložnik, Domen, Punithakumar, Kumaradevan, Qayyum, Abdul, Queirós, Sandro, Rahmim, Arman, Razavi, Salar, Ren, Jintao, Rezaei, Mina, Rico, Jonathan Adam, Rieu, ZunHyan, Rink, Markus, Roth, Johannes, Ruiz-Gonzalez, Yusely, Saeed, Numan, Saha, Anindo, Salem, Mostafa, Sanchez-Matilla, Ricardo, Schilling, Kurt, Shao, Wei, Shen, Zhiqiang, Shi, Ruize, Shi, Pengcheng, Sobotka, Daniel, Soulier, Théodore, Fadida, Bella Specktor, Stoyanov, Danail, Mun, Timothy Sum Hon, Sun, Xiaowu, Tao, Rong, Thaler, Franz, Théberge, Antoine, Thielke, Felix, Torres, Helena, Wahid, Kareem A., Wang, Jiacheng, Wang, YiFei, Wang, Wei, Wang, Xiong, Wen, Jianhui, Wen, Ning, Wodzinski, Marek, Wu, Ye, Xia, Fangfang, Xiang, Tianqi, Xiaofei, Chen, Xu, Lizhan, Xue, Tingting, Yang, Yuxuan, Yang, Lin, Yao, Kai, Yao, Huifeng, Yazdani, Amirsaeed, Yip, Michael, Yoo, Hwanseung, Yousefirizi, Fereshteh, Yu, Shunkai, Yu, Lei, Zamora, Jonathan, Zeineldin, Ramy Ashraf, Zeng, Dewen, Zhang, Jianpeng, Zhang, Bokai, Zhang, Jiapeng, Zhang, Fan, Zhang, Huahong, Zhao, Zhongchen, Zhao, Zixuan, Zhao, Jiachen, Zhao, Can, Zheng, Qingshuo, Zhi, Yuheng, Zhou, Ziqi, Zou, Baosheng, Maier-Hein, Klaus, Jäger, Paul F., Kopp-Schneider, Annette, Maier-Hein, Lena
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
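Patch-based training, the most common workaround reported above (69%), simply crops random sub-volumes that fit into GPU memory. A minimal sketch follows; the patch size is an arbitrary illustrative choice.

```python
import numpy as np

def random_patch(volume: np.ndarray, patch_size=(128, 128, 128), rng=None):
    """Crop a random 3D patch from a volume too large to process at once."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, max(s - p, 0) + 1)
              for s, p in zip(volume.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices]
```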
The Brain Tumor Segmentation (BraTS) Challenge 2023: Local Synthesis of Healthy Brain Tissue via Inpainting
Kofler, Florian, Meissen, Felix, Steinbauer, Felix, Graf, Robert, Oswald, Eva, de la Rosa, Ezequiel, Li, Hongwei Bran, Baid, Ujjwal, Hoelzl, Florian, Turgut, Oezguen, Horvath, Izabela, Waldmannstetter, Diana, Bukas, Christina, Adewole, Maruf, Anwar, Syed Muhammad, Janas, Anastasia, Kazerooni, Anahita Fathi, LaBella, Dominic, Moawad, Ahmed W, Farahani, Keyvan, Eddy, James, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Dako, Farouk, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Familiar, Ariana, Conte, Gian-Marco, Johanson, Elaine, Meier, Zeke, Davatzikos, Christos, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan M, Wiest, Roland, Kirschke, Jan, Colen, Rivka R, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-André, Mahajan, Abhishek, Mohan, Suyash, Mongan, John, Hess, Christopher, Cha, Soonmee, Villanueva-Meyer, Javier, Colak, Errol, Crivellaro, Priscila, Jakab, Andras, Albrecht, Jake, Anazodo, Udunna, Aboian, Mariam, Iglesias, Juan Eugenio, Van Leemput, Koen, Bakas, Spyridon, Rueckert, Daniel, Wiestler, Benedikt, Ezhov, Ivan, Piraud, Marie, Menze, Bjoern
A myriad of algorithms for the automatic analysis of brain MR images is available to support clinicians in their decision-making. For brain tumor patients, however, the image acquisition time series typically starts with a scan that is already pathological. This poses problems, as many algorithms are designed to analyze healthy brains and provide no guarantees for images featuring lesions. Examples include, but are not limited to, algorithms for brain anatomy parcellation, tissue segmentation, and brain extraction. To solve this dilemma, we introduce the BraTS 2023 inpainting challenge. Here, the participants' task is to explore inpainting techniques to synthesize healthy brain scans from lesioned ones. The following manuscript contains the task formulation, dataset, and submission procedure; it will later be updated to summarize the findings of the challenge. The challenge is organized as part of the BraTS 2023 challenge hosted at the MICCAI 2023 conference in Vancouver, Canada.
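Synthesizing healthy tissue inside a masked region is naturally trained as a masked-reconstruction problem: during training the mask is placed over healthy tissue, so the ground truth underneath is known. The sketch below shows one generic objective of this kind; the model interface, tensor layout, and L1 loss are illustrative assumptions, not the challenge baseline.

```python
import torch
import torch.nn.functional as F

def inpainting_loss(model: torch.nn.Module, scan: torch.Tensor,
                    mask: torch.Tensor) -> torch.Tensor:
    """Masked-reconstruction objective: `scan` is a fully healthy volume of
    shape (B, 1, D, H, W) and `mask` marks the region to synthesize."""
    masked_input = scan * (1 - mask)                     # blank out the region to fill
    pred = model(torch.cat([masked_input, mask], dim=1))  # mask as a second channel
    # Penalize the reconstruction only where tissue had to be synthesized.
    return F.l1_loss(pred * mask, scan * mask)
```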
The Impact of ChatGPT and LLMs on Medical Imaging Stakeholders: Perspectives and Use Cases
Yang, Jiancheng, Li, Hongwei Bran, Wei, Donglai
This study investigates the transformative potential of Large Language Models (LLMs), such as OpenAI's ChatGPT, in medical imaging. With the aid of public data, these models, which possess remarkable language understanding and generation capabilities, are augmenting the interpretive skills of radiologists, enhancing patient-physician communication, and streamlining clinical workflows. The paper introduces an analytic framework for presenting the complex interactions between LLMs and the broader ecosystem of medical imaging stakeholders, including businesses, insurance entities, governments, research institutions, and hospitals (nicknamed BIGR-H). Through detailed analyses, illustrative use cases, and discussions on the broader implications and future directions, this perspective seeks to stimulate discussion on strategic planning and decision-making in the era of AI-enabled healthcare.