Neuroscientists Transform Brain Activity to Speech with AI

#artificialintelligence

Artificial intelligence is enabling many scientific breakthroughs, especially in fields that generate high volumes of complex data, such as neuroscience. As impossible as it may seem, neuroscientists are making strides in decoding neural activity into speech using artificial neural networks. Yesterday, the neuroscience team of Gopala K. Anumanchipalli, Josh Chartier, and Edward F. Chang of the University of California, San Francisco (UCSF) published in Nature their study using artificial intelligence and a state-of-the-art brain-machine interface to produce synthetic speech from brain recordings. The concept is relatively straightforward: record participants' brain activity and audio while they read aloud, use those recordings to build a system that decodes brain signals into vocal tract movements, and then synthesize speech from the decoded movements. Executing the concept required sophisticated finessing of cutting-edge AI techniques and tools.
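
The pipeline described above is, in effect, two sequence decoders chained together: neural recordings to vocal tract kinematics, then kinematics to acoustics. Here is a minimal PyTorch sketch of that two-stage design, purely for illustration; it is not the authors' code, and all layer sizes and feature counts are assumptions:

```python
import torch
import torch.nn as nn

class BrainToSpeechDecoder(nn.Module):
    """Sketch of a two-stage decoder: neural recordings -> vocal tract
    kinematics -> acoustic features for a speech synthesizer.
    Dimensions are illustrative, not taken from the paper."""

    def __init__(self, n_electrodes=256, n_kinematics=33, n_acoustics=32):
        super().__init__()
        # Stage 1: decode articulatory (vocal tract) movements from recordings.
        self.neural_to_kinematics = nn.LSTM(
            n_electrodes, 128, num_layers=2,
            bidirectional=True, batch_first=True)
        self.kinematics_head = nn.Linear(2 * 128, n_kinematics)
        # Stage 2: map decoded movements to acoustic features.
        self.kinematics_to_acoustics = nn.LSTM(
            n_kinematics, 128, num_layers=2,
            bidirectional=True, batch_first=True)
        self.acoustics_head = nn.Linear(2 * 128, n_acoustics)

    def forward(self, ecog):                     # (batch, time, electrodes)
        h, _ = self.neural_to_kinematics(ecog)
        kinematics = self.kinematics_head(h)     # (batch, time, kinematics)
        h, _ = self.kinematics_to_acoustics(kinematics)
        acoustics = self.acoustics_head(h)       # (batch, time, acoustics)
        return kinematics, acoustics
```

The intermediate kinematic stage is the key design choice reported in the study: decoding movements first, rather than audio directly, exploits the fact that the motor cortex encodes articulation rather than sound.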


Predicting Treatment Initiation from Clinical Time Series Data via Graph-Augmented Time-Sensitive Model

arXiv.org Machine Learning

Many computational models have been proposed to extract temporal patterns from clinical time series, both for individual patients and across patient groups, for predictive healthcare. However, relations among patients (e.g., sharing the same doctor) have rarely been considered. In this paper, we represent patient-clinician relations with bipartite graphs that encode, for example, from whom a patient received a diagnosis. We then solve for the top eigenvectors of the graph Laplacian and feed these eigenvectors into a time-sensitive prediction model as latent representations of patient-clinician similarity. We conducted experiments on real-world data to predict the initiation of first-line treatment for Chronic Lymphocytic Leukemia (CLL) patients. Results show that relational similarity can improve prediction over multiple baselines, for example a 5% improvement over a long short-term memory (LSTM) baseline in terms of area under the precision-recall curve.
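
As a rough sketch of the graph side of this approach (not the authors' code), the snippet below builds a bipartite patient-clinician graph, forms its Laplacian, and extracts a spectral embedding with SciPy. The function name is hypothetical, and taking the eigenvectors with the smallest eigenvalues (the standard spectral-embedding choice) is an assumption about what "top eigenvectors" means here:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_features(patient_ids, clinician_ids, k=8):
    """Embed a bipartite patient-clinician graph via Laplacian
    eigenvectors; rows of the result could be concatenated with the
    time-series features fed to an LSTM-style predictor."""
    n_p = max(patient_ids) + 1
    n_c = max(clinician_ids) + 1
    # Biadjacency matrix B: B[p, c] = 1 if clinician c treated patient p.
    B = sp.coo_matrix((np.ones(len(patient_ids)),
                       (patient_ids, clinician_ids)),
                      shape=(n_p, n_c)).tocsr()
    # Full adjacency of the bipartite graph: [[0, B], [B^T, 0]].
    A = sp.bmat([[None, B], [B.T, None]]).tocsr()
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(deg) - A          # unnormalized graph Laplacian
    # k eigenvectors with the smallest eigenvalues (spectral embedding);
    # the paper's exact eigenvector selection may differ.
    vals, vecs = eigsh(L, k=k, which='SM')
    return vecs[:n_p], vecs[n_p:]  # patient rows, clinician rows

# Toy usage: patients 0 and 1 share clinician 0, patients 1 and 2
# share clinician 1, so their embeddings end up close together.
pat_emb, clin_emb = spectral_features([0, 1, 1, 2], [0, 0, 1, 1], k=2)
```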


Learning to Exploit Invariances in Clinical Time-Series Data using Sequence Transformer Networks

arXiv.org Machine Learning

Recently, researchers have started applying convolutional neural networks (CNNs) with one-dimensional convolutions to clinical tasks involving time-series data. This is due, in part, to their computational efficiency relative to recurrent neural networks, and to their ability to efficiently exploit certain temporal invariances (e.g., phase invariance). However, it is well established that clinical data may exhibit many other types of invariances (e.g., scaling). While preprocessing techniques (e.g., dynamic time warping) may successfully transform and align inputs, their use often requires one to identify the types of invariances in advance. In contrast, we propose the use of Sequence Transformer Networks, an end-to-end trainable architecture that learns to identify and account for invariances in clinical time-series data. Applied to the task of predicting in-hospital mortality, our proposed approach achieves an improvement in the area under the receiver operating characteristic curve (AUROC) relative to a baseline CNN (AUROC = 0.851 vs. 0.838). Our results suggest that a variety of valuable invariances can be learned directly from the data.
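
For intuition, here is a hedged PyTorch sketch of the core idea, the 1-D analog of a spatial transformer: a small localization network predicts a temporal scale and shift, and the sequence is resampled accordingly before the downstream CNN sees it. All layer sizes are illustrative, and this is not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceTransformer(nn.Module):
    """Sketch of a sequence transformer module: learns a temporal
    scale and shift end-to-end, so the downstream classifier sees a
    'canonicalized' sequence. Sizes are illustrative assumptions."""

    def __init__(self, n_channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, 2))                # predicts [scale, shift]
        # Initialize to the identity transform (scale=1, shift=0).
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1.0, 0.0]))

    def forward(self, x):                    # x: (batch, channels, time)
        params = self.loc(x)                 # (batch, 2)
        scale, shift = params[:, 0], params[:, 1]
        b, c, t = x.shape
        # Build a 2-D affine grid with a dummy height axis so we can
        # reuse F.affine_grid / F.grid_sample for 1-D resampling.
        theta = torch.zeros(b, 2, 3, device=x.device)
        theta[:, 0, 0] = scale               # temporal scaling (width axis)
        theta[:, 0, 2] = shift               # temporal translation
        theta[:, 1, 1] = 1.0                 # dummy height axis: identity
        grid = F.affine_grid(theta, (b, c, 1, t), align_corners=False)
        warped = F.grid_sample(x.unsqueeze(2), grid, align_corners=False)
        return warped.squeeze(2)             # (batch, channels, time)
```

Because the resampling is differentiable, the localization network is trained jointly with the classifier, which is what lets the invariances be identified from the data rather than specified in advance.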


Improving localization-based approaches for breast cancer screening exam classification

arXiv.org Machine Learning

We trained and evaluated a localization-based deep CNN for breast cancer screening exam classification on over 200,000 exams (over 1,000,000 images). Our model achieves an AUC of 0.919 in predicting malignancy in patients undergoing breast cancer screening, reducing the error rate of the baseline (Wu et al., 2019a) by 23%. In addition, the model generates bounding boxes for benign and malignant findings, providing interpretable predictions.
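
The 23% figure is consistent with reading "error rate" as 1 − AUC. A minimal arithmetic check, assuming a baseline AUC of roughly 0.895 for Wu et al. (2019a), a value not stated in this summary:

```python
# Hedged arithmetic check: "error rate" taken here as 1 - AUC.
# The baseline AUC of 0.895 is an assumption, not given above.
baseline_auc, model_auc = 0.895, 0.919
reduction = ((1 - baseline_auc) - (1 - model_auc)) / (1 - baseline_auc)
print(f"relative error-rate reduction: {reduction:.0%}")  # ~23%
```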


Click click snap: One look at a patient's face, and AI can identify rare genetic diseases

#artificialintelligence

WASHINGTON D.C. [USA]: According to a recent study, a new artificial intelligence technology can accurately identify rare genetic disorders from a photograph of a patient's face. Named DeepGestalt, the AI technology outperformed clinicians in identifying a range of syndromes in three trials and could add value in personalised care, CNN reported. The study was published in the journal Nature Medicine. According to the study, eight per cent of the population has a disease with key genetic components, and many of those affected may have recognisable facial features. The study further notes that the technology could identify, for example, Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth with widely spaced teeth. Yaron Gurovich, the chief technology officer at FDNA and lead researcher of the study, said: "It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great."
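
The imbalance Gurovich describes, few patients per condition across many conditions, is commonly addressed with class-weighted losses on a pretrained backbone. The following is a hypothetical sketch of that general recipe, not FDNA's DeepGestalt; the class count and data are made up:

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch (not DeepGestalt): fine-tune a pretrained image
# backbone for many-class syndrome classification, weighting the loss
# to counter unbalanced patients-per-condition counts.
n_syndromes = 200                      # hypothetical number of conditions
counts = torch.randint(5, 500, (n_syndromes,)).float()  # per-class patients
class_weights = counts.sum() / (n_syndromes * counts)   # rarer -> heavier

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Linear(backbone.fc.in_features, n_syndromes)
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

faces = torch.randn(4, 3, 224, 224)   # dummy batch of aligned face crops
labels = torch.randint(0, n_syndromes, (4,))
loss = loss_fn(backbone(faces), labels)
loss.backward()
```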