Yuan, Yading
FedKBP: Federated dose prediction framework for knowledge-based planning in radiation therapy
Chen, Jingyun, King, Martin, Yuan, Yading
Dose prediction plays a key role in knowledge-based planning (KBP) by automatically generating patient-specific dose distributions. Recent advances in deep learning-based dose prediction methods necessitate collaboration among data contributors for improved performance. Federated learning (FL) has emerged as a solution, enabling medical centers to jointly train deep-learning models without compromising patient data privacy. We developed the FedKBP framework to evaluate the performance of centralized, federated, and individual (i.e., separate) training of a dose prediction model on 340 plans from the OpenKBP dataset. To simulate FL and individual training, we divided the data into 8 training sites. To evaluate the effect of inter-site data variation on model training, we implemented two types of case distribution: 1) independent and identically distributed (IID), where training and validation cases were evenly divided among the 8 sites, and 2) non-IID, where some sites have more cases than others. The results show that FL consistently outperforms individual training in both model optimization speed and out-of-sample testing scores, highlighting the advantage of FL over individual training. Under the IID data division, FL performs comparably to centralized training, underscoring FL as a promising alternative to traditional pooled-data training. Under the non-IID division, larger sites outperformed smaller sites by up to 19% on testing scores, confirming the need for collaboration among data owners to achieve better prediction accuracy. Meanwhile, non-IID FL showed reduced performance compared to IID FL, indicating the need for more sophisticated FL methods beyond mere model averaging to handle data variation among participating sites.
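The model averaging mentioned above follows the standard FedAvg scheme: after each round of local training, per-site model parameters are averaged into a global model, weighted by each site's case count. A minimal sketch of that aggregation step (the function, toy models, and case counts here are illustrative, not the FedKBP implementation):

```python
import numpy as np

def fedavg(site_weights, site_sizes):
    """Average per-site parameter lists into a global model, weighting
    each site's parameters by its share of the total case count
    (standard FedAvg aggregation)."""
    total = sum(site_sizes)
    n_layers = len(site_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(site_weights, site_sizes))
        for k in range(n_layers)
    ]

# Toy example: 3 sites, each holding a one-layer "model" of 2 parameters.
sites = [[np.array([1.0, 2.0])],
         [np.array([3.0, 4.0])],
         [np.array([5.0, 6.0])]]

iid_model = fedavg(sites, [10, 10, 10])     # equal case counts (IID)
noniid_model = fedavg(sites, [60, 20, 20])  # one dominant site (non-IID)
```

Under the IID division all site weights are equal; under a non-IID division the largest site dominates the average, which illustrates why plain model averaging can degrade when case counts and data characteristics diverge across sites.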
Decentralized Gossip Mutual Learning (GML) for brain tumor segmentation on multi-parametric MRI
Chen, Jingyun, Yuan, Yading
Federated Learning (FL) enables collaborative model training among medical centers without sharing private data. However, traditional FL is vulnerable to server failure and may yield suboptimal performance on local data because of its centralized model aggregation. To address these issues, we present Gossip Mutual Learning (GML), a decentralized framework that uses the Gossip Protocol for direct peer-to-peer communication. In addition, GML encourages each site to optimize its local model through mutual learning to account for data variations among sites. On the task of tumor segmentation using 146 cases from four clinical sites in the BraTS 2021 dataset, we demonstrated that GML outperformed local models and achieved performance similar to FedAvg with only 25% of the communication overhead.
Decentralized Gossip Mutual Learning (GML) for automatic head and neck tumor segmentation
Chen, Jingyun, Yuan, Yading
Federated learning (FL) has emerged as a promising strategy for collaboratively training complex machine learning models across medical centers without the need for data sharing. However, traditional FL relies on a central server to orchestrate global model training among clients, making it vulnerable to failure of the model server. Meanwhile, a model trained on global data properties may not yield the best performance on a particular site's local data because of variations in data characteristics among sites. To address these limitations, we proposed Gossip Mutual Learning (GML), a decentralized collaborative learning framework that employs the Gossip Protocol for direct peer-to-peer communication and encourages each site to optimize its local model by leveraging useful information from peers through mutual learning. On the task of tumor segmentation on PET/CT images using the HECKTOR21 dataset, with 223 cases from five clinical sites, we demonstrated that GML improved tumor segmentation performance in terms of Dice Similarity Coefficient (DSC) by 3.2%, 4.6% and 10.4% on site-specific testing cases compared to three baseline methods: pooled training, FedAvg and individual training, respectively. We also showed that GML generalizes comparably to pooled training and FedAvg on 78 cases from two out-of-sample sites, where no case was used for model training. In our experimental setup, GML required only 16.67% of FedAvg's communication overhead, a sixfold reduction.
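The gossip round described in the two GML abstracts can be sketched as follows: each site contacts a single randomly chosen peer per round and updates its local model using that peer's parameters, so no central server is involved. The fixed blending update below is a simplified stand-in for the paper's mutual-learning loss, and all names and values are illustrative:

```python
import random

def gossip_round(models, mix=0.5, rng=random):
    """One gossip round: every site picks one random peer and blends the
    peer's parameters into its own local model. The fixed blending rate
    `mix` is a simplified stand-in for the mutual-learning update."""
    n = len(models)
    updated = []
    for i, mine in enumerate(models):
        peer = rng.choice([j for j in range(n) if j != i])
        updated.append([(1 - mix) * w + mix * models[peer][k]
                        for k, w in enumerate(mine)])
    return updated

random.seed(0)
models = [[0.0], [4.0], [8.0]]  # three sites, one scalar parameter each
for _ in range(20):
    models = gossip_round(models)
# Repeated peer-to-peer rounds drive the site models toward consensus
# without any central aggregation server.
```

Because each site exchanges parameters with only one peer per round, rather than uploading to and downloading from a central server every round, the per-round communication cost stays low, consistent with the overhead savings over FedAvg reported above.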
Federated Learning Enables Big Data for Rare Cancer Boundary Detection
Pati, Sarthak, Baid, Ujjwal, Edwards, Brandon, Sheller, Micah, Wang, Shih-Han, Reina, G Anthony, Foley, Patrick, Gruzdev, Alexey, Karkada, Deepthi, Davatzikos, Christos, Sako, Chiharu, Ghodasara, Satyam, Bilello, Michel, Mohan, Suyash, Vollmuth, Philipp, Brugnara, Gianluca, Preetha, Chandrakanth J, Sahm, Felix, Maier-Hein, Klaus, Zenk, Maximilian, Bendszus, Martin, Wick, Wolfgang, Calabrese, Evan, Rudie, Jeffrey, Villanueva-Meyer, Javier, Cha, Soonmee, Ingalhalikar, Madhura, Jadhav, Manali, Pandey, Umang, Saini, Jitender, Garrett, John, Larson, Matthew, Jeraj, Robert, Currie, Stuart, Frood, Russell, Fatania, Kavi, Huang, Raymond Y, Chang, Ken, Balana, Carmen, Capellades, Jaume, Puig, Josep, Trenkler, Johannes, Pichler, Josef, Necker, Georg, Haunschmidt, Andreas, Meckel, Stephan, Shukla, Gaurav, Liem, Spencer, Alexander, Gregory S, Lombardo, Joseph, Palmer, Joshua D, Flanders, Adam E, Dicker, Adam P, Sair, Haris I, Jones, Craig K, Venkataraman, Archana, Jiang, Meirui, So, Tiffany Y, Chen, Cheng, Heng, Pheng Ann, Dou, Qi, Kozubek, Michal, Lux, Filip, Michálek, Jan, Matula, Petr, Keřkovský, Miloš, Kopřivová, Tereza, Dostál, Marek, Vybíhal, Václav, Vogelbaum, Michael A, Mitchell, J Ross, Farinhas, Joaquim, Maldjian, Joseph A, Yogananda, Chandan Ganesh Bangalore, Pinho, Marco C, Reddy, Divya, Holcomb, James, Wagner, Benjamin C, Ellingson, Benjamin M, Cloughesy, Timothy F, Raymond, Catalina, Oughourlian, Talia, Hagiwara, Akifumi, Wang, Chencai, To, Minh-Son, Bhardwaj, Sargam, Chong, Chee, Agzarian, Marc, Falcão, Alexandre Xavier, Martins, Samuel B, Teixeira, Bernardo C A, Sprenger, Flávia, Menotti, David, Lucio, Diego R, LaMontagne, Pamela, Marcus, Daniel, Wiestler, Benedikt, Kofler, Florian, Ezhov, Ivan, Metz, Marie, Jain, Rajan, Lee, Matthew, Lui, Yvonne W, McKinley, Richard, Slotboom, Johannes, Radojewski, Piotr, Meier, Raphael, Wiest, Roland, Murcia, Derrick, Fu, Eric, Haas, Rourke, Thompson, John, Ormond, David Ryan, Badve, Chaitra, Sloan, Andrew E, Vadmal, Vachan, 
Waite, Kristin, Colen, Rivka R, Pei, Linmin, Ak, Murat, Srinivasan, Ashok, Bapuraj, J Rajiv, Rao, Arvind, Wang, Nicholas, Yoshiaki, Ota, Moritani, Toshio, Turk, Sevcan, Lee, Joonsang, Prabhudesai, Snehal, Morón, Fanny, Mandel, Jacob, Kamnitsas, Konstantinos, Glocker, Ben, Dixon, Luke V M, Williams, Matthew, Zampakis, Peter, Panagiotopoulos, Vasileios, Tsiganos, Panagiotis, Alexiou, Sotiris, Haliassos, Ilias, Zacharaki, Evangelia I, Moustakas, Konstantinos, Kalogeropoulou, Christina, Kardamakis, Dimitrios M, Choi, Yoon Seong, Lee, Seung-Koo, Chang, Jong Hee, Ahn, Sung Soo, Luo, Bing, Poisson, Laila, Wen, Ning, Tiwari, Pallavi, Verma, Ruchika, Bareja, Rohan, Yadav, Ipsa, Chen, Jonathan, Kumar, Neeraj, Smits, Marion, van der Voort, Sebastian R, Alafandi, Ahmed, Incekara, Fatih, Wijnenga, Maarten MJ, Kapsas, Georgios, Gahrmann, Renske, Schouten, Joost W, Dubbink, Hendrikus J, Vincent, Arnaud JPE, Bent, Martin J van den, French, Pim J, Klein, Stefan, Yuan, Yading, Sharma, Sonam, Tseng, Tzu-Chi, Adabi, Saba, Niclou, Simone P, Keunen, Olivier, Hau, Ann-Christin, Vallières, Martin, Fortin, David, Lepage, Martin, Landman, Bennett, Ramadass, Karthik, Xu, Kaiwen, Chotai, Silky, Chambless, Lola B, Mistry, Akshitkumar, Thompson, Reid C, Gusev, Yuriy, Bhuvaneshwar, Krithika, Sayah, Anousheh, Bencheqroun, Camelia, Belouali, Anas, Madhavan, Subha, Booth, Thomas C, Chelliah, Alysha, Modat, Marc, Shuaib, Haris, Dragos, Carmen, Abayazeed, Aly, Kolodziej, Kenneth, Hill, Michael, Abbassy, Ahmed, Gamal, Shady, Mekhaimar, Mahmoud, Qayati, Mohamed, Reyes, Mauricio, Park, Ji Eun, Yun, Jihye, Kim, Ho Sung, Mahajan, Abhishek, Muzi, Mark, Benson, Sean, Beets-Tan, Regina G H, Teuwen, Jonas, Herrera-Trujillo, Alejandro, Trujillo, Maria, Escobar, William, Abello, Ana, Bernal, Jose, Gómez, Jhon, Choi, Joseph, Baek, Stephen, Kim, Yusung, Ismael, Heba, Allen, Bryan, Buatti, John M, Kotrotsou, Aikaterini, Li, Hongwei, Weiss, Tobias, Weller, Michael, Bink, Andrea, Pouymayou, Bertrand, Shaykh, Hassan F,
Saltz, Joel, Prasanna, Prateek, Shrestha, Sampurna, Mani, Kartik M, Payne, David, Kurc, Tahsin, Pelaez, Enrique, Franco-Maldonado, Heydy, Loayza, Francis, Quevedo, Sebastian, Guevara, Pamela, Torche, Esteban, Mendoza, Cristobal, Vera, Franco, Ríos, Elvis, López, Eduardo, Velastin, Sergio A, Ogbole, Godwin, Oyekunle, Dotun, Odafe-Oyibotha, Olubunmi, Osobu, Babatunde, Shu'aibu, Mustapha, Dorcas, Adeleye, Soneye, Mayowa, Dako, Farouk, Simpson, Amber L, Hamghalam, Mohammad, Peoples, Jacob J, Hu, Ricky, Tran, Anh, Cutler, Danielle, Moraes, Fabio Y, Boss, Michael A, Gimpel, James, Veettil, Deepak Kattil, Schmidt, Kendall, Bialecki, Brian, Marella, Sailaja, Price, Cynthia, Cimino, Lisa, Apgar, Charles, Shah, Prashant, Menze, Bjoern, Barnholtz-Sloan, Jill S, Martin, Jason, Bakas, Spyridon
Although machine learning (ML) has shown promise in numerous domains, there are concerns about its generalizability to out-of-sample data. This is currently addressed by centrally sharing ample, and importantly diverse, data from multiple sites. However, such centralization is challenging to scale (or even infeasible) due to various limitations. Federated ML (FL) provides an alternative for training accurate and generalizable ML models by sharing only numerical model updates. Here we present findings from the largest FL study to date, involving data from 71 healthcare institutions across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, utilizing the largest dataset of such patients ever used in the literature (25,256 MRI scans from 6,314 patients). We demonstrate a 33% improvement over a publicly trained model in delineating the surgically targetable tumor, and a 23% improvement over the tumor's entire extent. We anticipate our study will: 1) enable more healthcare studies informed by large and diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further quantitative analyses for glioblastoma via performance optimization of our consensus model for eventual public release, and 3) demonstrate the effectiveness of FL at such scale and task complexity as a paradigm shift for multi-site collaborations, alleviating the need for data sharing.