Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth
Laslo, Daria, Georgiou, Efthymios, Linguraru, Marius George, Rauschecker, Andreas, Muller, Sabine, Jutzeler, Catherine R., Bruningk, Sarah
Predicting the spatio-temporal progression of brain tumors is essential for guiding clinical decisions in neuro-oncology. We propose a hybrid mechanistic learning framework that combines a mathematical tumor growth model with a guided denoising diffusion implicit model (DDIM) to synthesize anatomically feasible future MRIs from preceding scans. The mechanistic model, formulated as a system of ordinary differential equations, captures temporal tumor dynamics including radiotherapy effects and estimates future tumor burden. These estimates condition a gradient-guided DDIM, enabling image synthesis that aligns with both predicted growth and patient anatomy. We train our model on the BraTS adult and pediatric glioma datasets and evaluate on 60 axial slices of in-house longitudinal pediatric diffuse midline glioma (DMG) cases. Our framework generates realistic follow-up scans based on spatial similarity metrics. It also introduces tumor growth probability maps, which capture both clinically relevant extent and directionality of tumor growth as shown by 95th percentile Hausdorff Distance. The method enables biologically informed image generation in data-limited scenarios, offering generative spatio-temporal predictions that account for mechanistic priors.
- North America > United States > California > San Francisco County > San Francisco (0.28)
- Europe > Switzerland > Zürich > Zürich (0.15)
- North America > United States > District of Columbia > Washington (0.04)
- (2 more...)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.46)
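The abstract above describes an ODE-based mechanistic model of tumor burden with radiotherapy effects, but does not spell out its equations. As a minimal illustration of the idea only (not the authors' actual system), here is a toy logistic-growth model with a discrete kill fraction applied on treatment days; all parameter names and values are invented:

```python
def simulate_tumor(v0, r, K, rt_days, kill_frac, n_days):
    """Daily logistic tumor growth with a discrete radiotherapy
    'kill fraction' applied on treatment days (toy model)."""
    v = v0
    trajectory = [v]
    for day in range(1, n_days + 1):
        v += r * v * (1.0 - v / K)    # logistic growth, dt = 1 day
        if day in rt_days:
            v *= 1.0 - kill_frac      # RT removes a fraction of cells
        trajectory.append(v)
    return trajectory

# Untreated growth vs. a 5-day radiotherapy course starting on day 20
untreated = simulate_tumor(1.0, 0.1, 100.0, set(), 0.3, 60)
treated = simulate_tumor(1.0, 0.1, 100.0, {20, 21, 22, 23, 24}, 0.3, 60)
```

In the paper's pipeline, the predicted burden from such a model would then serve as the conditioning signal for the guided DDIM.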
Conditional Diffusion Models Based Conditional Independence Testing
Yang, Yanfeng, Li, Shuai, Zhang, Yingjie, Sun, Zhuoran, Shu, Hai, Chen, Ziqi, Zhang, Renming
Conditional independence (CI) testing is a fundamental task in modern statistics and machine learning. The conditional randomization test (CRT) was recently introduced to test whether two random variables, $X$ and $Y$, are conditionally independent given a potentially high-dimensional set of random variables, $Z$. The CRT operates exceptionally well under the assumption that the conditional distribution $X|Z$ is known. However, since this distribution is typically unknown in practice, accurately approximating it becomes crucial. In this paper, we propose using conditional diffusion models (CDMs) to learn the distribution of $X|Z$. Theoretically and empirically, it is shown that CDMs closely approximate the true conditional distribution. Furthermore, CDMs offer a more accurate approximation of $X|Z$ compared to GANs, potentially leading to a CRT that performs better than those based on GANs. To accommodate complex dependency structures, we utilize a computationally efficient classifier-based conditional mutual information (CMI) estimator as our test statistic. The proposed testing procedure performs effectively without requiring assumptions about specific distribution forms or feature dependencies, and is capable of handling mixed-type conditioning sets that include both continuous and discrete variables. Theoretical analysis shows that our proposed test achieves a valid control of the type I error. A series of experiments on synthetic data demonstrates that our new test effectively controls both type-I and type-II errors, even in high dimensional scenarios.
- North America > United States > New York (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
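The CRT described in the abstract can be sketched in a few lines: resample $X$ from (an approximation of) $X|Z$ with $Y$ held fixed, and compare the observed test statistic against its resampled distribution. The sketch below is deliberately generic, taking the conditional sampler and statistic as caller-supplied arguments; in the paper the sampler is a learned conditional diffusion model and the statistic is a classifier-based CMI estimator:

```python
import random

def crt_pvalue(x, y, z, sample_x_given_z, statistic, n_resamples=100, seed=0):
    """Conditional randomization test: resample X from X|Z with Y fixed
    and compare the observed statistic to the resampled ones."""
    rng = random.Random(seed)
    t_obs = statistic(x, y, z)
    exceed = 0
    for _ in range(n_resamples):
        x_tilde = [sample_x_given_z(zi, rng) for zi in z]
        if statistic(x_tilde, y, z) >= t_obs:
            exceed += 1
    return (1 + exceed) / (1 + n_resamples)   # finite-sample valid p-value
```

The `(1 + exceed) / (1 + n_resamples)` form is what gives the CRT its finite-sample type-I error control when the conditional sampler is exact.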
Predicting Breast Cancer Survival: A Survival Analysis Approach Using Log Odds and Clinical Variables
Alamu, Opeyemi Sheu, Choque, Bismar Jorge Gutierrez, Rizvi, Syed Wajeeh Abbs, Hammed, Samah Badr, Medani, Isameldin Elamin, Siam, Md Kamrul, Tahir, Waqar Ahmad
Breast cancer remains a significant global health challenge, with prognosis and treatment decisions largely dependent on clinical characteristics. Accurate prediction of patient outcomes is crucial for personalized treatment strategies. This study employs survival analysis techniques, including Cox proportional hazards and parametric survival models, to enhance the prediction of the log odds of survival in breast cancer patients. Clinical variables such as tumor size, hormone receptor status, HER2 status, age, and treatment history were analyzed to assess their impact on survival outcomes. Data from 1557 breast cancer patients were obtained from a publicly available dataset provided by the University College Hospital, Ibadan, Nigeria. This dataset was preprocessed and analyzed using both univariate and multivariate approaches to evaluate survival outcomes. Kaplan-Meier survival curves were generated to visualize survival probabilities, while the Cox proportional hazards model identified key risk factors influencing mortality. The results showed that older age, larger tumor size, and HER2-positive status were significantly associated with an increased risk of mortality. In contrast, estrogen receptor positivity and breast-conserving surgery were linked to better survival outcomes. The findings suggest that integrating these clinical variables into predictive models improves the accuracy of survival predictions, helping to identify high-risk patients who may benefit from more aggressive interventions. This study demonstrates the potential of survival analysis in optimizing breast cancer care, particularly in resource-limited settings. Future research should focus on integrating genomic data and real-world clinical outcomes to further refine these models.
- Africa > Nigeria > Oyo State > Ibadan (0.24)
- North America > United States > New York (0.04)
- Africa > Sudan > Khartoum State > Khartoum (0.04)
- (7 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
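The Kaplan-Meier curves mentioned in the abstract are simple to compute: at each distinct event time, the running survival probability is multiplied by $(1 - d_i/n_i)$, where $d_i$ is the number of events and $n_i$ the number still at risk. A minimal sketch (1 = event, 0 = censored):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate: S(t) = prod over event times t_i <= t
    of (1 - d_i / n_i), with d_i events among n_i at risk."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []  # (event time, survival probability) pairs
    i = 0
    while i < len(data):
        t = data[i][0]
        d = removed = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]   # events (deaths) observed at time t
            removed += 1      # everyone at time t leaves the risk set
            i += 1
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve
```

Censored subjects shrink the risk set without triggering a drop in the curve, which is exactly how censoring is handled in the study's analysis.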
10 Datasets from Kaggle You Should Practice On to Improve Your Data Science Skills
Kaggle is a website where you can find competitions to solve data science problems. It's free to join, and it gives you the opportunity to practice your skills on real-world datasets in various industries. This post will introduce 10 datasets that are great for practicing your skills before heading into an interview, or just because they're interesting! The Titanic dataset is probably one of the most popular datasets on Kaggle. It's a great dataset to start with because it has plenty of variables (13) and records (over 1500).
- North America > United States > Wisconsin (0.06)
- North America > United States > District of Columbia > Washington (0.05)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.48)
- Health & Medicine > Therapeutic Area > Oncology (0.31)
Regression Trees
This blog assumes that the reader is familiar with the concepts of decision trees and regression. If not, refer to the blogs below. Having read those blogs, or being already familiar with the topics, you should by now understand what a decision tree is (the one we used for the classification task). A regression tree is simply a decision tree used for regression: it predicts continuous-valued outputs instead of discrete ones. In Decision Trees for Classification, we saw how the tree asks the right questions at the right nodes in order to give accurate and efficient classifications.
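The core difference from a classification tree is the split criterion: instead of impurity, a regression tree picks the threshold that minimizes the squared error of predicting each side's mean. A minimal single-feature sketch of that search (one CART step):

```python
def best_split(xs, ys):
    """One CART step for regression: pick the threshold on a single
    feature minimizing the summed squared error of predicting each
    side's mean."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    pairs = sorted(zip(xs, ys))
    best_cost, best_thr = float("inf"), None
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2.0   # midpoint candidate
        left = [y for x, y in pairs if x <= thr]
        right = [y for x, y in pairs if x > thr]
        cost = sse(left) + sse(right)
        if cost < best_cost:
            best_cost, best_thr = cost, thr
    return best_cost, best_thr
```

A full tree just applies this recursively to each side until a depth or sample-size limit is hit, and each leaf predicts its mean.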
The Confusion Behind Logistic Regression
Logistic Regression may be the most confusingly named supervised machine learning algorithm. Because it is a classification algorithm, the word "Regression" in its name often confuses newcomers to the data science world. We know that Linear Regression is used to predict outputs that take continuous values. However, imagine a situation where we want to sort continuous outputs into discrete classes. For example, if we have data on students' marks and, based on the percentage, we want to classify each result as pass or fail.
- Research Report > New Finding (0.64)
- Research Report > Experimental Study (0.64)
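The pass/fail example maps directly onto code: squash a linear score through the sigmoid and fit it by gradient descent on the log loss. A minimal 1-D sketch (the marks are centered at the 50% pass threshold; all values are made up):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train_logistic(xs, ys, lr=0.5, epochs=1000):
    """Batch gradient descent on the log loss for a 1-D logistic model."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of log loss wrt the score
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Marks as fractions centered at the 50% pass mark; 1 = pass, 0 = fail
marks = [20, 30, 40, 60, 70, 80]
w, b = train_logistic([(m - 50) / 100 for m in marks], [0, 0, 0, 1, 1, 1])
```

Despite the "Regression" in the name, the continuous output here is a probability, and thresholding it at 0.5 is what turns the model into a classifier.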
Variable Selection with Random Survival Forest and Bayesian Additive Regression Tree for Survival Data
Saha, Satabdi, Ryu, Duchwan, Ebrahimi, Nader
In this paper we utilize a survival analysis methodology incorporating Bayesian additive regression trees to account for nonlinear and additive covariate effects. We compare the performance of Bayesian additive regression trees, Cox proportional hazards, and random survival forests models for censored survival data, using simulation studies and a survival analysis of breast cancer with the U.S. SEER database for the year 2005. In the simulation studies, we compare the three models across varying sample sizes and censoring rates on the basis of bias and prediction accuracy. In the breast cancer survival analysis, we retrospectively analyze a subset of 1500 patients with invasive ductal carcinoma, a common form of breast cancer mostly affecting older women. The predictive potential of the three models is then compared using several widely used performance assessment measures from the survival literature.
- North America > United States > Illinois > DeKalb County > DeKalb (0.04)
- North America > United States > Michigan > Ingham County > Lansing (0.04)
- North America > United States > Michigan > Ingham County > East Lansing (0.04)
- Research Report > Experimental Study (0.88)
- Research Report > New Finding (0.68)
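One of the widely used performance measures alluded to in the abstract is Harrell's concordance index, which is simple to compute directly. This is a generic sketch of that metric, not the paper's specific evaluation code:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs, the fraction where
    the subject with the shorter survival time has the higher
    predicted risk (ties in risk count as half)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:   # a pair is comparable only if the earlier time is an event
            continue
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 1.0 means the model ranks every comparable pair correctly, 0.5 is chance-level ranking, and values below 0.5 indicate reversed ranking.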
Artificial intelligence model "learns" from patient data to make cancer treatment less toxic
MIT researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma, the most aggressive form of brain cancer. Glioblastoma is a malignant tumor that appears in the brain or spinal cord, and prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.
AI can make sure cancer patients get just enough (but not too much) treatment
Patients with glioblastoma, a malignant tumor in the brain or spinal cord, typically live no more than five years after receiving their diagnosis. And those five years can be painful -- in an effort to minimize the tumor, doctors often prescribe a combination of radiation therapy and drugs that can cause debilitating side effects for patients. Now, researchers from MIT Media Lab have developed artificial intelligence (AI) that can determine the minimum drug doses needed to effectively shrink glioblastoma patients' tumors. They plan to present their research at Stanford University's 2018 Machine Learning for Healthcare conference. To create an AI that could determine the best dosing regimen for glioblastoma patients, the MIT researchers turned to a training technique known as reinforcement learning (RL). First, they created a testing group of 50 simulated glioblastoma patients based on a large dataset of those that had previously undergone treatment for their disease.
- Health & Medicine > Therapeutic Area > Oncology > Childhood Cancer (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Brain Cancer (1.00)
A.I Model Makes Cancer Treatments Less Toxic For Patients
MIT researchers developed a machine-learning technique that reduces toxic chemotherapy and radiotherapy dosing for the most aggressive form of brain cancer, thus improving patients' quality of life. Glioblastoma is a malignant tumor so aggressive that the prognosis for adults stands at no more than five years. It appears in the brain or the spinal cord, and once it sets in, patients fight it with a monthly combination of radiation therapy and various drugs. The strong medications still cause serious side effects, and even so, medical professionals administer the maximum safe dose they are allowed. Enter MIT, who built an AI model that uses a technique called Reinforcement Learning (RL), in which a model learns to favor behaviour that eventually leads to a desired outcome.
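The reinforcement learning setup these articles describe, an agent that learns a dosing policy from simulated patients, can be illustrated with tabular Q-learning on a toy problem. Everything below (the state buckets, dose effects, and toxicity penalty) is invented for illustration and is far simpler than the MIT model:

```python
import random

def run_q_learning(episodes=500, seed=0):
    """Tabular Q-learning on a toy dosing problem: states are coarse
    tumor-size buckets (0 = no visible tumor), actions are dose levels
    0-2, and the reward trades shrinkage against a toxicity penalty.
    All dynamics here are invented for illustration."""
    rng = random.Random(seed)
    n_states, n_actions = 5, 3
    q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.2, 0.9, 0.1
    for _ in range(episodes):
        s = n_states - 1                     # each episode starts with a large tumor
        for _ in range(20):
            if rng.random() < eps:           # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            shrink = 1 if rng.random() < 0.3 * a else 0   # higher dose, more shrinkage
            s2 = max(s - shrink, 0)
            reward = (s - s2) - 0.12 * a * a  # shrinkage minus convex toxicity cost
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

The convex toxicity term is what makes the learned policy prefer "just enough" dosing rather than always administering the maximum dose, mirroring the trade-off the articles describe.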