Patel, Shalin
BetaExplainer: A Probabilistic Method to Explain Graph Neural Networks
Sloneker, Whitney, Patel, Shalin, Wang, Michael, Crawford, Lorin, Singh, Ritambhara
Relational data occur in a variety of domains, such as social graphs [25], chemical structures [17], physical systems [25], gene-gene interactions [25], and epidemiological modeling [8]. These data are best represented by graphs that effectively model their relationships, such as chemical bonds in drug molecules that affect toxicity or treatment efficacy [25], or personal interactions in social networks indicating contact [17]. Although graph representations capture these datasets more accurately by incorporating node features (e.g., chemical weight for molecules) and node interactions through edges (e.g., chemical bonds) [25], large-scale modeling to learn their patterns can be challenging if the graphs are complex [6, 22]. Embedding methods such as Graphlets [12] and DeepWalk [10] have been developed to address these challenges.
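As a concrete illustration of the graph encoding described above, the minimal sketch below (not taken from the paper; the fragment, feature values, and layer sizes are invented) builds a toy molecular graph with per-node features and bond edges using PyTorch Geometric, then runs a single graph-convolution layer over it.

```python
# Minimal sketch, assuming PyTorch Geometric: nodes carry feature vectors
# (e.g., atom descriptors) and edges encode interactions (e.g., bonds).
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Hypothetical 4-atom fragment: one scalar feature per node.
x = torch.tensor([[12.0], [1.0], [16.0], [1.0]])   # node features, [num_nodes, num_features]
edge_index = torch.tensor([[0, 1, 0, 2, 2, 3],      # source nodes
                           [1, 0, 2, 0, 3, 2]])     # target nodes (bonds stored in both directions)
graph = Data(x=x, edge_index=edge_index)

# One graph-convolution layer aggregates each node's neighbours; this is the
# edge-dependent computation that a GNN explainer would later attribute to
# specific edges.
conv = GCNConv(in_channels=1, out_channels=8)
node_embeddings = conv(graph.x, graph.edge_index)
print(node_embeddings.shape)                        # torch.Size([4, 8])
```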
An efficient deep neural network to find small objects in large 3D images
Park, Jungkyu, Chłędowski, Jakub, Jastrzębski, Stanisław, Witowski, Jan, Xu, Yanqi, Du, Linda, Gaddam, Sushma, Kim, Eric, Lewin, Alana, Parikh, Ujas, Plaunova, Anastasia, Chen, Sardius, Millet, Alexandra, Park, James, Pysarenko, Kristine, Patel, Shalin, Goldberg, Julia, Wegener, Melanie, Moy, Linda, Heacock, Laura, Reig, Beatriu, Geras, Krzysztof J.
3D imaging enables accurate diagnosis by providing spatial information about organ anatomy. However, using 3D images to train AI models is computationally challenging because they consist of 10x or 100x more pixels than their 2D counterparts. To be trained with high-resolution 3D images, convolutional neural networks resort to downsampling them or projecting them to 2D. We propose an effective alternative, a neural network that enables efficient classification of full-resolution 3D medical images. Compared to off-the-shelf convolutional neural networks, our network, 3D Globally-Aware Multiple Instance Classifier (3D-GMIC), uses 77.98%-90.05% less GPU memory and 91.23%-96.02% less computation. While it is trained only with image-level labels, without segmentation labels, it explains its predictions by providing pixel-level saliency maps. On a dataset collected at NYU Langone Health, including 85,526 patients with full-field 2D mammography (FFDM), synthetic 2D mammography, and 3D mammography, 3D-GMIC achieves an AUC of 0.831 (95% CI: 0.769-0.887) in classifying breasts with malignant findings using 3D mammography. This is comparable to the performance of GMIC on FFDM (0.816, 95% CI: 0.737-0.878) and synthetic 2D (0.826, 95% CI: 0.754-0.884), which demonstrates that 3D-GMIC successfully classified large 3D images despite focusing computation on a smaller percentage of its input compared to GMIC. Therefore, 3D-GMIC identifies and utilizes extremely small regions of interest from 3D images consisting of hundreds of millions of pixels, dramatically reducing associated computational challenges. 3D-GMIC generalizes well to BCS-DBT, an external dataset from Duke University Hospital, achieving an AUC of 0.848 (95% CI: 0.798-0.896).
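The sketch below illustrates, in heavily simplified form, the general saliency-guided, multiple-instance pattern described in the abstract: a cheap global module scores a downsampled copy of the volume, only the top-scoring locations are cropped from the full-resolution volume, and a small local module classifies those patches before their evidence is pooled into an image-level prediction. This is an assumed illustration of the idea, not the 3D-GMIC implementation; all module definitions, patch sizes, and names are hypothetical.

```python
# Toy sketch of saliency-guided patch selection on a 3D volume (illustrative
# only; not the 3D-GMIC architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyGuidedClassifier(nn.Module):
    def __init__(self, patch_size=32, num_patches=4, downsample=4):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = num_patches
        self.downsample = downsample
        # Global module: cheap, runs on a heavily downsampled copy of the volume.
        self.global_net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 1),                      # 1-channel saliency logits
        )
        # Local module: only ever sees small full-resolution patches.
        self.local_net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, volume):
        # volume: [1, 1, D, H, W] full-resolution 3D image.
        coarse = F.interpolate(volume, scale_factor=1 / self.downsample,
                               mode="trilinear", align_corners=False)
        saliency = self.global_net(coarse)[0, 0]     # coarse saliency map [D', H', W']

        # Pick the most salient coarse locations and map them back to
        # full-resolution patch positions.
        dp, hp, wp = saliency.shape
        top = torch.topk(saliency.flatten(), self.num_patches).indices
        d_idx = torch.div(top, hp * wp, rounding_mode="floor")
        h_idx = torch.div(top % (hp * wp), wp, rounding_mode="floor")
        w_idx = top % wp

        patch_logits, half = [], self.patch_size // 2
        for d, h, w in zip(d_idx.tolist(), h_idx.tolist(), w_idx.tolist()):
            d0 = max(0, min(d * self.downsample - half, volume.shape[2] - self.patch_size))
            h0 = max(0, min(h * self.downsample - half, volume.shape[3] - self.patch_size))
            w0 = max(0, min(w * self.downsample - half, volume.shape[4] - self.patch_size))
            patch = volume[:, :, d0:d0 + self.patch_size,
                                 h0:h0 + self.patch_size,
                                 w0:w0 + self.patch_size]
            patch_logits.append(self.local_net(patch))

        # Pool patch evidence into a single image-level prediction, so only an
        # image-level label is needed during training.
        return torch.stack(patch_logits).mean(), saliency

model = SaliencyGuidedClassifier()
volume = torch.randn(1, 1, 64, 128, 128)             # toy stand-in for a 3D volume
logit, saliency_map = model(volume)
print(logit.shape, saliency_map.shape)
```

The point of this pattern is that the expensive, high-capacity computation touches only a tiny fraction of the input voxels, which is how a full-resolution 3D volume can be classified without the memory and compute cost of convolving over the entire image at full capacity.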