Cognitive disorders, including dementia, are increasingly being reported as a complication of SARS-CoV-2, the highly contagious virus that causes Covid-19, according to researchers behind a recent study at the Cleveland Clinic in Ohio. "Reports of neurological complications in Covid-19 patients and 'long-hauler' patients whose symptoms persist after the infection clears are becoming more common, suggesting that [the virus] may have lasting effects on brain function," said the authors of the study, which was published this week in the journal Alzheimer's Research & Therapy. The researchers aimed to uncover the mechanisms behind brain-associated complications such as delirium and the loss of taste or smell that are often found in novel coronavirus patients. To do so, they compared, at the molecular level, the host genes of Covid-19 with those implicated in several neurological disorders. Having collected data from both Covid-19 patients and people with Alzheimer's disease, they used artificial intelligence to measure the proximity between the two.
Blockwise missing data occurs frequently when we integrate multisource or multimodality data where different sources or modalities contain complementary information. In this paper, we consider a high-dimensional linear regression model with blockwise missing covariates and a partially observed response variable. Under this semi-supervised framework, we propose a computationally efficient estimator for the regression coefficient vector based on carefully constructed unbiased estimating equations and a multiple blockwise imputation procedure, and obtain its rates of convergence. Furthermore, building upon an innovative semi-supervised projected estimating equation technique that intrinsically achieves bias correction of the initial estimator, we propose nearly unbiased estimators for the individual regression coefficients that are asymptotically normally distributed under mild conditions. By carefully analyzing these debiased estimators, we construct asymptotically valid confidence intervals and statistical tests for each regression coefficient. Numerical studies and an application to the Alzheimer's Disease Neuroimaging Initiative data show that the proposed method performs better, and benefits more from unlabeled samples, than existing methods.
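The blockwise-imputation idea behind this setup can be illustrated with a toy sketch. This is a simplified stand-in, not the authors' multiple blockwise imputation or projected estimating-equation estimator: one source observes both covariate blocks, another observes only the first, and the missing block is imputed via a regression fit on the fully observed source before a joint least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_full, n_miss, p1, p2 = 200, 200, 3, 3
beta = np.array([1.0, -1.0, 0.5, 2.0, 0.0, -0.5])

# Block 2 is correlated with block 1, so it is predictable from block 1
X1 = rng.normal(size=(n_full + n_miss, p1))
X2 = 0.6 * X1 + rng.normal(scale=0.5, size=(n_full + n_miss, p2))
X = np.hstack([X1, X2])
y = X @ beta + rng.normal(scale=0.3, size=n_full + n_miss)

# Source A (first n_full rows) observes both blocks; source B only block 1
full = slice(0, n_full)
miss = slice(n_full, n_full + n_miss)

# Step 1: learn an imputation map X2 ~ X1 on the fully observed source
W, *_ = np.linalg.lstsq(X1[full], X2[full], rcond=None)

# Step 2: impute the missing block for source B
X2_hat = X1[miss] @ W
impute_rmse = np.sqrt(np.mean((X2_hat - X2[miss]) ** 2))

# Step 3: least-squares fit on the completed design
X_imp = np.vstack([X[full], np.hstack([X1[miss], X2_hat])])
beta_hat, *_ = np.linalg.lstsq(X_imp, y, rcond=None)
```

Note that a naive fit on imputed covariates is generally biased; the paper's estimating-equation construction and debiasing step exist precisely to correct for this.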
The world's first Alzheimer's disease (AD) drug candidate designed by artificial intelligence (AI) is entering Phase 1 clinical trials, thanks to a successful collaboration between Exscientia Ltd and Sumitomo Dainippon Pharma. Exscientia's announcement states that the companies will initiate a Phase 1 clinical study of DSP-0038 in the United States for the treatment of Alzheimer's disease psychosis. DSP-0038 is the third molecule created using Exscientia's AI technologies to enter clinical trials. The two earlier compounds are DSP-1181, announced in 2020 together with Sumitomo Dainippon Pharma to treat obsessive-compulsive disorder, and Exscientia's immuno-oncology agent, EXS-21546, announced earlier this year. Joint research between Exscientia and Sumitomo Dainippon Pharma designed DSP-0038 as a single small molecule that exhibits high potency as an antagonist of the 5-HT2A receptor and an agonist of the 5-HT1A receptor, whilst selectively avoiding similar receptors and unwanted targets such as the dopamine D2 receptor.
Paul De Sousa, head of life sciences at Massive Analytic and a former researcher at Edinburgh University, writes about a study using artificial precognition AI to analyse the results of protein biomarker tests associated with Alzheimer's disease progression. Accounting for over 30 million disability-adjusted life years worldwide, Alzheimer's disease (AD) is a global societal challenge and a threat to healthcare systems around the world. A long history of failed AD drug trials has highlighted the need for early detection and diagnosis, so that patients and clinicians can implement the best life adjustments or medical interventions to alter the course of the disease and personalise the care of those at risk. Biomarkers are measurable indicators of the biological conditions of health, on which disease prognosis and diagnosis are founded. In AD, a range of diagnostic procedures can detect these biomarkers, including cerebrospinal fluid (CSF) tests and PET scans for markers of amyloid-β and tau, which can accurately detect AD pathology; however, their cost and invasive nature preclude the broad accessibility required for early detection.
In this paper, we propose a new method to perform data augmentation in a reliable way in the High Dimensional Low Sample Size (HDLSS) setting using a geometry-based variational autoencoder. Our approach combines proper latent-space modeling of the VAE, seen as a Riemannian manifold, with a new generation scheme that produces more meaningful samples, especially in the context of small data sets. The proposed method is tested through a wide experimental study where its robustness to data sets, classifiers, and training sample size is stressed. It is also validated on a medical imaging classification task on the challenging ADNI database, where a small number of 3D brain MRIs are considered and augmented using the proposed VAE framework. In each case, the proposed method allows for a significant and reliable gain in the classification metrics. For instance, balanced accuracy jumps from 66.3% to 74.3% for a state-of-the-art CNN classifier trained with 50 MRIs of cognitively normal (CN) and 50 Alzheimer's disease (AD) patients, and from 77.7% to 86.3% when trained with 243 CN and 210 AD, while greatly improving sensitivity and specificity.
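The latent-space augmentation idea can be sketched in a few lines. Here PCA stands in for the VAE's encoder/decoder purely for brevity; the actual method models the latent space as a Riemannian manifold, which this toy does not capture:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy HDLSS data: 40 samples, 500 features (a stand-in for flattened MRIs)
n, d, k = 40, 500, 5
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.05 * rng.normal(size=(n, d))

# "Encoder": project onto the top-k principal directions (stand-in for the VAE latent space)
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:k].T            # latent codes, shape (40, 5)

# Generation scheme: sample new latents near existing ones, then "decode"
Z_new = Z + 0.1 * rng.normal(size=Z.shape) * Z.std(axis=0)
X_aug = Z_new @ Vt[:k] + mu        # synthetic samples back in feature space

# Augmented training set, twice the original size
X_train = np.vstack([X, X_aug])
```

Perturbing in the latent space, rather than adding noise directly to the 500-dimensional features, keeps the synthetic samples close to the low-dimensional structure of the data, which is the point of this family of augmentation methods.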
Learning temporal patterns from multivariate longitudinal data is challenging, especially when the data are sporadic, as is often the case in healthcare applications. Such data can suffer from irregularity and asynchronicity: the time between consecutive data points can vary across features and samples, hindering the application of existing deep learning models built for complete, evenly spaced data with fixed sequence lengths. In this paper, a novel deep learning-based model is developed for modeling multiple temporal features in sporadic data, using an integrated architecture that combines a recurrent neural network (RNN) unit with a continuous-time autoregressive (CAR) model. The proposed model, called CARRNN, uses a generalized discrete-time autoregressive model, trainable end-to-end, in which neural networks are modulated by time lags to describe the changes caused by irregularity and asynchronicity. It is applied to multivariate time-series regression tasks using data for Alzheimer's disease progression modeling and intensive care unit (ICU) mortality-rate prediction, where the proposed model based on a gated recurrent unit (GRU) achieves the lowest prediction errors among the proposed RNN-based models and state-of-the-art methods that use GRUs and long short-term memory (LSTM) networks in their architectures.
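A minimal sketch of the core idea, a GRU cell whose hidden state decays over the gap since the previous observation, might look as follows. The exponential decay form and the rate `gamma` are illustrative assumptions, not the CARRNN parameterization, and the random weights stand in for trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
d_in, d_h = 3, 8
# Random weights stand in for trained GRU parameters
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(d_h, d_in + d_h)) for _ in range(3))
gamma = 0.5  # assumed decay rate toward zero between observations

def gru_car_step(h, x, dt):
    # Continuous-time-style decay: the longer the gap dt, the more the state fades
    h = h * np.exp(-gamma * dt)
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)                                  # update gate
    r = sigmoid(Wr @ hx)                                  # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))    # candidate state
    return (1 - z) * h + z * h_tilde

# Irregularly sampled sequence: (observation, time since previous observation)
h = np.zeros(d_h)
for x, dt in [(rng.normal(size=d_in), 0.0),
              (rng.normal(size=d_in), 0.4),
              (rng.normal(size=d_in), 2.7)]:  # uneven gaps
    h = gru_car_step(h, x, dt)
```

Feeding the elapsed time into the recurrence is what lets a single model handle sequences whose sampling intervals differ across features and patients.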
The rise of precision medicine is being augmented by greater use of deep learning technologies that provide predictive analytics for earlier diagnosis of a range of debilitating diseases. The latest example comes from researchers at Michigan-based Beaumont Health, who used deep learning to analyze genomic DNA. The resulting simple blood test could be used to detect the onset of Alzheimer's disease earlier. In a study published this week in the peer-reviewed scientific journal PLOS ONE, the researchers said their analysis discovered 152 "significant" genetic differences between Alzheimer's patients and healthy controls. Those biomarkers could be used to provide a diagnosis before Alzheimer's symptoms develop and a patient's brain is irreversibly damaged.
Carmen Jiménez-Mesa, Javier Ramírez, John Suckling, Jonathan Vöglein, Johannes Levin, Juan Manuel Górriz, for the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Dominantly Inherited Alzheimer Network (DIAN)
Discriminative analysis in neuroimaging by means of deep/machine learning techniques is usually assessed with validation techniques, whereas the associated statistical significance remains largely under-developed due to its computational complexity. In this work, a non-parametric framework is proposed that estimates the statistical significance of classifications using deep learning architectures. In particular, a combination of autoencoders (AE) and support vector machines (SVM) is applied to: (i) one-condition, within-group designs, often of normal controls (NC), and (ii) two-condition, between-group designs that contrast, for example, Alzheimer's disease (AD) patients with NC (the extension to multi-class analyses is also included). A random-effects inference based on a label permutation test is proposed for both designs, using cross-validation (CV) and resubstitution with upper bound correction (RUB) as validation methods. This allows both false positives and classifier overfitting to be detected, as well as the statistical power of the test to be estimated. Several experiments were carried out using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the Dominantly Inherited Alzheimer Network (DIAN) dataset, and an MCI prediction dataset. In the permutation test, we found that the CV and RUB methods offer a false positive rate close to the significance level and acceptable statistical power (although lower for cross-validation). A large separation between training and test accuracies using CV was observed, especially in one-condition designs. This implies low generalization ability, as the model fitted in training is not informative with respect to the test set. As a solution, we propose applying RUB, which yields results similar to those of the CV test set but considers the whole set at a lower computational cost per iteration.
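The label permutation test at the heart of this framework can be sketched compactly. A nearest-centroid classifier is used below as a lightweight stand-in for the AE+SVM pipeline, and the single train/test split stands in for the full CV/RUB machinery:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_score(Xtr, ytr, Xte, yte):
    # Nearest-centroid classifier: lightweight stand-in for the AE+SVM pipeline
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

# Two-condition, between-group design: class 1 shifted away from class 0
n, d = 60, 20
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1] += 0.8

tr = rng.permutation(n)[: n // 2]
te = np.setdiff1d(np.arange(n), tr)
obs = fit_score(X[tr], y[tr], X[te], y[te])  # observed accuracy

# Null distribution: refit after permuting the labels, breaking any real effect
null = [fit_score(X[tr], y[tr][rng.permutation(len(tr))],
                  X[te], y[te][rng.permutation(len(te))])
        for _ in range(200)]
p_value = (1 + sum(a >= obs for a in null)) / (1 + len(null))
```

A small p-value indicates the observed accuracy is unlikely under label-shuffled data, which is how the framework separates genuine discrimination from overfitting at a chosen significance level.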
BOSTON - New treatments for Alzheimer's disease are desperately needed, but numerous clinical trials of investigational drugs have failed to generate promising options. Now a team at Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) has developed an artificial intelligence-based method to screen currently available medications as possible treatments for Alzheimer's disease. The method could represent a rapid and inexpensive way to repurpose existing therapies into new treatments for this progressive, debilitating neurodegenerative condition. Importantly, it could also help reveal new, unexplored targets for therapy by pointing to mechanisms of drug action. "Repurposing FDA-approved drugs for Alzheimer's disease is an attractive idea that can help accelerate the arrival of effective treatment--but unfortunately, even for previously approved drugs, clinical trials require substantial resources, making it impossible to evaluate every drug in patients with Alzheimer's disease," explains Artem Sokolov, PhD, director of Informatics and Modeling at the Laboratory of Systems Pharmacology at HMS. "We therefore built a framework for prioritizing drugs, helping clinical studies to focus on the most promising ones."