Fruehwirt, Wolfgang, Cobb, Adam D., Mairhofer, Martin, Weydemann, Leonard, Garn, Heinrich, Schmidt, Reinhold, Benke, Thomas, Dal-Bianco, Peter, Ransmayr, Gerhard, Waser, Markus, Grossegger, Dieter, Zhang, Pengfei, Dorffner, Georg, Roberts, Stephen
As societies around the world are ageing, the number of Alzheimer's disease (AD) patients is rapidly increasing. To date, no low-cost, non-invasive biomarkers have been established to advance the objectivization of AD diagnosis and progression assessment. Here, we utilize Bayesian neural networks to develop a multivariate predictor for AD severity using a wide range of quantitative EEG (QEEG) markers. The Bayesian treatment of neural networks both automatically controls model complexity and provides a predictive distribution over the target function, giving uncertainty bounds for our regression task. It is therefore well suited to clinical neuroscience, where data sets are typically sparse and practitioners require a precise assessment of the predictive uncertainty. We use data from one of the largest prospective AD EEG trials ever conducted to demonstrate the potential of Bayesian deep learning in this domain, while comparing two distinct Bayesian neural network approaches, i.e., Monte Carlo dropout and Hamiltonian Monte Carlo.
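Of the two approaches the abstract compares, Monte Carlo dropout is the simpler to illustrate: dropout is kept active at test time, and repeated stochastic forward passes give a predictive distribution whose spread serves as an uncertainty estimate. The following is a minimal numpy sketch with a toy one-hidden-layer regression network and made-up weights; it is not the authors' model, only a hedged illustration of the sampling idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer regression net with fixed (pretend-trained) weights.
W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.tanh(x @ W1 + b1)
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(200)])  # T stochastic passes

mean = samples.mean(axis=0)  # predictive mean
std = samples.std(axis=0)    # spread across passes ~ predictive uncertainty
```

Hamiltonian Monte Carlo instead samples network weights from the full posterior, which is more faithful but far more expensive; the dropout scheme above trades fidelity for a negligible cost over standard inference.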
Variational auto-encoders (VAEs) are scalable and powerful generative models. However, the choice of the variational posterior determines the tractability and flexibility of the VAE. Commonly, latent variables are modeled using a normal distribution with a diagonal covariance matrix. This is computationally efficient but typically not flexible enough to match the true posterior distribution. One way of enriching the variational posterior is to apply normalizing flows, i.e., a series of invertible transformations to latent variables drawn from a simple posterior. In this paper, we follow this line of thinking and propose a volume-preserving flow that uses a series of Householder transformations. We show empirically on the MNIST dataset and on histopathology data that the proposed flow yields a more flexible variational posterior and results competitive with other normalizing flows.
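The key property of a Householder transformation H = I − 2vvᵀ/‖v‖² is that it is orthogonal, so |det H| = 1: applying a series of them is volume-preserving, and the log-det-Jacobian term in the ELBO vanishes. A minimal numpy sketch of chaining such reflections (illustrative only, not the paper's amortized parameterization, where each v would be produced by the encoder):

```python
import numpy as np

def householder(v):
    """Householder reflection H = I - 2 v v^T / ||v||^2 (orthogonal, det = -1)."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(0)
d = 4
z = rng.normal(size=d)  # latent sample from the simple (diagonal) posterior

# A short flow: K = 3 Householder steps with random reflection vectors.
Hs = [householder(rng.normal(size=d)) for _ in range(3)]

z_k = z
for H in Hs:
    z_k = H @ z_k  # each step rotates/reflects z without changing its norm
```

Because each step is orthogonal, the transformed sample z_k keeps the norm of z exactly, which is a quick way to verify volume preservation numerically.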
Diagnosing disease is one of the more labor-intensive aspects of the healthcare system. It also happens to be one that is particularly well-suited to being performed by machine learning algorithms. While work in this area is in its early stages, the technology is evolving rapidly and appears poised to transform diagnostic medicine. Thanks largely to the huge volumes of data collected from patients, medical diagnostics is an ideal domain for machine learning. Much of the diagnostic data is image-based, such as X-rays, MRI scans, and ultrasound imagery, but can also include things like genomic profiles, epidemiological data, blood tests, biopsy results, and even medical research papers.
I made an automated skin disease diagnosis DEMO website based on a deep learning algorithm (Model Dermatology; http://ModelDerm.com). ResNet152 and VGG19 were used as CNN models, and around 300,000 images (179 classes; 176 skin disorders) were used as the training dataset. The training images were collected from 4 university hospitals in Korea. This CNN model is the successor to my onychomycosis model (http://nail.medicalphoto.org). The web-based test platform provides 3 differential diagnoses after analyzing an image.
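Reporting 3 differential diagnoses typically means taking the three highest-probability classes from the classifier's softmax output. A minimal sketch of that step, with hypothetical class names and logits (the actual 179-class model and its outputs are not reproduced here):

```python
import numpy as np

def top3_differential(logits, class_names):
    """Return the three highest-probability classes from model logits."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # numerically stable softmax
    order = np.argsort(probs)[::-1][:3]   # indices of the top-3 classes
    return [(class_names[i], float(probs[i])) for i in order]

# Hypothetical class names and logits, for illustration only.
names = ["melanoma", "nevus", "seborrheic keratosis", "basal cell carcinoma"]
logits = np.array([2.1, 3.0, 0.5, 1.2])
ranked = top3_differential(logits, names)
```

In an ensemble of CNNs such as ResNet152 and VGG19, the per-model probabilities would usually be averaged before this ranking step.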