Artificial intelligence is enabling many scientific breakthroughs, especially in fields that generate high volumes of complex data, such as neuroscience. As impossible as it may seem, neuroscientists are making strides in decoding neural activity into speech using artificial neural networks. Yesterday, the neuroscience team of Gopala K. Anumanchipalli, Josh Chartier, and Edward F. Chang of the University of California, San Francisco (UCSF) published in Nature a study that used artificial intelligence and a state-of-the-art brain-machine interface to produce synthetic speech from brain recordings. The concept is relatively straightforward: record the brain activity and audio of participants while they read aloud, train a system that decodes the brain signals into vocal tract movements, and then synthesize speech from the decoded movements. Executing the concept, however, required sophisticated finessing of cutting-edge AI techniques and tools.
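The two-stage pipeline described above can be sketched schematically. Everything below is a placeholder for illustration: the linear maps, dimensions, and variable names are assumptions, and the actual study used recurrent neural networks trained on electrocorticography (ECoG) recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the two decoding stages
# (the study used recurrent neural networks, not linear maps)
neural_to_kinematics = rng.normal(size=(64, 12))     # brain features -> vocal tract movements
kinematics_to_acoustics = rng.normal(size=(12, 32))  # movements -> acoustic features

ecog = rng.normal(size=(100, 64))                    # 100 time steps of recorded brain activity
kinematics = ecog @ neural_to_kinematics             # stage 1: decode articulator movements
acoustics = kinematics @ kinematics_to_acoustics     # stage 2: synthesize speech features
```

The key design choice the article highlights is this intermediate representation: rather than mapping brain activity to sound directly, the decoder first recovers vocal tract movements and only then maps those movements to speech.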
IBM said on Thursday that it will spend $240 million over the next decade to fund a new artificial intelligence research lab at the Massachusetts Institute of Technology. The resulting MIT–IBM Watson AI Lab will focus on a handful of key AI areas, including the development of new "deep learning" algorithms. Deep learning is a subset of AI that aims to bring human-like learning capabilities to computers so they can operate more autonomously. The Cambridge, Mass.-based lab will be led by Dario Gil, vice president of AI for IBM Research, and Anantha Chandrakasan, dean of MIT's engineering school. It will draw upon about 100 researchers from IBM itself and the university.
The Centers for Disease Control and Prevention (CDC) coordinates a labor-intensive process to measure the prevalence of autism spectrum disorder (ASD) among children in the United States. Random forest methods have shown promise in speeding up this process, but they lag behind human classification accuracy by about 5 percent. We explore whether newer document classification algorithms can close this gap. We applied 6 supervised learning algorithms to predict whether children meet the case definition for ASD based solely on the words in their evaluations. We compared the algorithms' performance across 10 random train-test splits of the data, and then combined our top 3 classifiers to estimate the Bayes error rate in the data. Across the 10 train-test cycles, the random forest, neural network, and support vector machine with Naive Bayes features (NB-SVM) each achieved slightly more than 86.5 percent mean accuracy. The Bayes error rate is estimated at approximately 12 percent, meaning that the reducible error for even the simplest of our algorithms, the random forest, is below 2 percent (a total error of roughly 13.5 percent minus the approximately 12 percent irreducible error). NB-SVM produced significantly more false positives than false negatives. The random forest performed as well as newer models such as the NB-SVM and the neural network. NB-SVM may not be a good candidate for use in a fully automated surveillance workflow because of its increased false positives. More sophisticated algorithms, such as hierarchical convolutional neural networks, would not perform substantially better given the characteristics of the data. Deep learning models performed similarly to traditional machine learning methods at predicting the clinician-assigned case status for CDC's autism surveillance system. While deep learning methods had limited benefit in this task, they may have applications in other surveillance systems.
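The NB-SVM mentioned above (an SVM over Naive Bayes log-count-ratio features, in the style of Wang and Manning) can be sketched on a toy corpus. The documents, labels, and hyperparameters below are invented for illustration and do not reflect the CDC evaluation data or the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy stand-in for evaluation text (hypothetical documents and labels)
docs = [
    "repetitive behaviors and social deficits noted",
    "typical development observed at follow-up",
    "deficits in social communication and repetitive play",
    "no concerns noted, typical milestones met",
]
y = np.array([1, 0, 1, 0])  # 1 = meets ASD case definition

# Binarized bag-of-words counts
X = CountVectorizer(binary=True).fit_transform(docs).toarray().astype(float)

# Naive Bayes log-count ratio: r = log((p / |p|_1) / (q / |q|_1))
p = 1.0 + X[y == 1].sum(axis=0)  # smoothed positive-class word counts
q = 1.0 + X[y == 0].sum(axis=0)  # smoothed negative-class word counts
r = np.log((p / p.sum()) / (q / q.sum()))

# Scale the features by r, then fit a linear SVM on the scaled features
clf = LinearSVC(C=10.0).fit(X * r, y)
preds = clf.predict(X * r)
```

The design idea is that the Naive Bayes ratio re-weights each word by how strongly it discriminates between the classes before the SVM draws its margin.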
Wolfgang Fruehwirt, Adam D. Cobb, Martin Mairhofer, Leonard Weydemann, Heinrich Garn, Reinhold Schmidt, Thomas Benke, Peter Dal-Bianco, Gerhard Ransmayr, Markus Waser, Dieter Grossegger, Pengfei Zhang, Georg Dorffner, and Stephen Roberts
As societies around the world age, the number of Alzheimer's disease (AD) patients is rapidly increasing. To date, no low-cost, non-invasive biomarkers have been established to make AD diagnosis and progression assessment more objective. Here, we utilize Bayesian neural networks to develop a multivariate predictor of AD severity from a wide range of quantitative EEG (QEEG) markers. The Bayesian treatment of neural networks both automatically controls model complexity and provides a predictive distribution over the target function, giving uncertainty bounds for our regression task. It is therefore well suited to clinical neuroscience, where data sets are typically sparse and practitioners require a precise assessment of predictive uncertainty. We use data from one of the largest prospective AD EEG trials ever conducted to demonstrate the potential of Bayesian deep learning in this domain, comparing two distinct Bayesian neural network approaches: Monte Carlo dropout and Hamiltonian Monte Carlo.
We trained and evaluated a localization-based deep CNN for breast cancer screening exam classification on over 200,000 exams (over 1,000,000 images). Our model achieves an AUC of 0.919 in predicting malignancy in patients undergoing breast cancer screening, reducing the error rate of the baseline (Wu et al., 2019a) by 23%. In addition, the model generates bounding boxes for benign and malignant findings, providing interpretable predictions.
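The relationship between the reported AUC and the 23% error-rate reduction can be checked with a small calculation, assuming "error rate" here means 1 − AUC (an assumption; the abstract does not define the term):

```python
# Assumed reading: error rate = 1 - AUC
new_auc = 0.919
relative_reduction = 0.23

# If the new error is a 23% relative reduction of the baseline error,
# baseline_error * (1 - 0.23) = 1 - new_auc
baseline_error = (1 - new_auc) / (1 - relative_reduction)
baseline_auc = 1 - baseline_error  # implied baseline AUC under this reading
```

Under this reading, the implied baseline AUC is roughly 0.895, consistent with a meaningful but incremental gain over Wu et al. (2019a).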