DeepMind releases AlphaFold database of nearly all human protein structures

#artificialintelligence

British artificial intelligence giant DeepMind has released a database of nearly all human protein structures that it amassed as part of its AlphaFold program. Last year, the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP) recognised AlphaFold as a solution to the grand challenge of figuring out what shapes proteins fold into. As one CASP organiser put it: "We have been stuck on this one problem – how do proteins fold up – for nearly 50 years. To see DeepMind produce a solution for this, having worked personally on this problem for so long and after so many stops and starts, wondering if we'd ever get there, is a very special moment." AlphaFold is a major scientific advance that will play a crucial role in helping scientists solve important problems such as the protein misfolding associated with Alzheimer's, Parkinson's, cystic fibrosis and Huntington's disease.


Brain signals 'speak' for person with paralysis

Science

A man unable to speak after a stroke has produced sentences through a system that reads electrical signals from speech production areas of his brain, researchers report this week. The approach has previously been used in nondisabled volunteers to reconstruct spoken or imagined sentences. But this first demonstration in a person who is paralyzed “tackles really the main issue that was left to be tackled—bringing this to the patients that really need it,” says Christian Herff, a computer scientist at Maastricht University who was not involved in the new work.

The participant had a stroke more than a decade ago that left him with anarthria—an inability to control the muscles involved in speech. Because his limbs are also paralyzed, he communicates by selecting letters on a screen using small movements of his head, producing roughly five words per minute. To enable faster, more natural communication, neurosurgeon Edward Chang of the University of California, San Francisco, tested an approach that uses a computational model known as a deep-learning algorithm to interpret patterns of brain activity in the sensorimotor cortex, a brain region involved in producing speech (Science, 4 January 2019, p. 14; http://www.sciencemag.org/content/363/6422/14). The approach has so far been tested in volunteers who have electrodes surgically implanted for nonresearch reasons, such as to monitor epileptic seizures.

In the new study, Chang's team temporarily removed a portion of the participant's skull and laid a thin sheet of electrodes smaller than a credit card directly over his sensorimotor cortex. To “train” a computer algorithm to associate brain activity patterns with the onset of speech and with particular words, the team needed reliable information about what the man intended to say and when. So the researchers repeatedly presented one of 50 words on a screen and asked the man to attempt to say it on cue. Once the algorithm was trained with data from the individual word task, the man tried to read sentences built from the same set of 50 words, such as “Bring my glasses, please.” To improve the algorithm's guesses, the researchers added a processing component called a natural language model, which uses common word sequences to predict the likely next word in a sentence. With that approach, the system only got about 25% of the words in a sentence wrong, they report this week in The New England Journal of Medicine. That's “pretty impressive,” says Stephanie Riès-Cornou, a neuroscientist at San Diego State University. (The error rate for chance performance would be 92%.)

Because the brain reorganizes over time, it wasn't clear that speech production areas would give interpretable signals after more than 10 years of anarthria, notes Anne-Lise Giraud, a neuroscientist at the University of Geneva. The signals' preservation “is surprising,” she says. And Herff says the team made a “gigantic” step by generating sentences as the man was attempting to speak rather than from previously recorded brain data, as most studies have done. With the new approach, the man could produce sentences at a rate of up to 18 words per minute, Chang says. That's roughly comparable to the speed achieved with another brain-computer interface, described in Nature in May. That system decoded individual letters from activity in a brain area responsible for planning hand movements while a person who was paralyzed imagined handwriting.

These speeds are still far from the 120 to 180 words per minute typical of conversational English, Riès-Cornou notes, but they far exceed what the participant can achieve with his head-controlled device. The system isn't ready for use in everyday life, Chang notes. Future improvements will include expanding its repertoire of words and making it wireless, so the user isn't tethered to a computer roughly the size of a minifridge.
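The two-stage decoding described above (a neural classifier over a 50-word vocabulary whose outputs are combined with a natural language model that predicts likely next words) can be illustrated with a toy sketch. The beam search, the bigram language model, and every name and number below are illustrative assumptions; the study's actual models are far more sophisticated.

import numpy as np

def decode_sentence(word_logprobs, bigram_logprobs, vocab, beam_width=5, lm_weight=1.0):
    """Toy decoder: combine per-attempt word-classifier scores with a bigram
    language model via beam search.

    word_logprobs:   (T, V) log-probabilities from the neural classifier,
                     one row per attempted word in the sentence.
    bigram_logprobs: (V, V) log P(word_t = j | word_{t-1} = i).
    vocab:           list of V words (e.g. the 50-word set).
    """
    beams = [([], 0.0)]                        # (word-index sequence, score)
    for t in range(word_logprobs.shape[0]):
        candidates = []
        for seq, score in beams:
            for w in range(len(vocab)):
                s = score + word_logprobs[t, w]
                if seq:                        # add language-model evidence
                    s += lm_weight * bigram_logprobs[seq[-1], w]
                candidates.append((seq + [w], s))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return [vocab[w] for w in beams[0][0]]

# Tiny made-up example with a 3-word vocabulary and a 2-word sentence.
vocab = ["bring", "my", "glasses"]
clf = np.log(np.array([[0.5, 0.3, 0.2],
                       [0.2, 0.3, 0.5]]))
lm = np.log(np.full((3, 3), 1 / 3))            # uniform bigram model
print(decode_sentence(clf, lm, vocab))         # -> ['bring', 'glasses']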


Trainee Rounds seminars: AI in Medicine

#artificialintelligence

DATE: August 10, 2021 (Tuesday)
TIME: 12pm to 1pm
HOW: Zoom meeting
AUDIENCE: This event is open to the public.

Anastasia Razdaibiedina, PhD student, Computational Biology and Machine Learning, University of Toronto
Discovering gene-disease relationships with deep learning

Understanding the genetic causes of diseases is one of the central goals in medicine. Most diseases have a complex genetic basis, and genes often act in 'modules' to determine phenotypes. An effective way to discover a module of disease-associated genes is to use biological networks, or interactomes, that describe interactions between genes and proteins. Here we use deep learning methods to infer an interactome computationally from microscopy imaging data, and subsequently discover gene-disease relationships from the constructed interactome.
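As a rough illustration of the two-step pipeline sketched in the abstract (learn per-gene features from microscopy images, connect similar genes into an interactome, then look for disease-associated genes in that network), the following hypothetical sketch uses cosine similarity of learned embeddings and network propagation. Every function name, threshold, and library choice here is an assumption, not the speaker's actual method.

import numpy as np
import networkx as nx

def build_interactome(embeddings, genes, threshold=0.8):
    """Connect genes whose learned image embeddings are highly similar.
    embeddings: (G, D) array, one feature vector per gene (e.g. produced by
    a deep network applied to that gene's microscopy images)."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                              # pairwise cosine similarity
    g = nx.Graph()
    g.add_nodes_from(genes)
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if sim[i, j] > threshold:
                g.add_edge(genes[i], genes[j], weight=float(sim[i, j]))
    return g

def rank_candidate_genes(g, known_disease_genes):
    """Personalised PageRank seeded at known disease genes: genes tightly
    connected to the disease module receive the highest scores."""
    seeds = {n: (1.0 if n in known_disease_genes else 0.0) for n in g}
    scores = nx.pagerank(g, personalization=seeds, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)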


A Robot Has Learned to Combine Vision and Touch - Neuroscience News

#artificialintelligence

Summary: Combining deep learning algorithms with robotic engineering, researchers have developed a new robot able to combine vision and touch. On the new EBRAINS research infrastructure, scientists of the Human Brain Project have connected brain-inspired deep learning to biomimetic robots. How the brain lets us perceive and navigate the world is one of the most fascinating aspects of cognition. When orienting ourselves, we constantly combine information from all six senses in a seemingly effortless way, a feature that even the most advanced AI systems struggle to replicate. On the new EBRAINS research infrastructure, cognitive neuroscientists, computational modelers, and roboticists are now working together to shed new light on the neural mechanisms behind this, by creating robots whose internal workings mimic the brain.


Deep Learning for AI

Communications of the ACM

Yoshua Bengio, Yann LeCun, and Geoffrey Hinton are recipients of the 2018 ACM A.M. Turing Award for breakthroughs that have made deep neural networks a critical component of computing. Research on artificial neural networks was motivated by the observation that human intelligence emerges from highly parallel networks of relatively simple, non-linear neurons that learn by adjusting the strengths of their connections. This observation leads to a central computational question: How is it possible for networks of this general kind to learn the complicated internal representations that are required for difficult tasks such as recognizing objects or understanding language? Deep learning seeks to answer this question by using many layers of activity vectors as representations and learning the connection strengths that give rise to these vectors by following the stochastic gradient of an objective function that measures how well the network is performing. It is very surprising that such a conceptually simple approach has proved to be so effective when applied to large training sets using huge amounts of computation, and it appears that a key ingredient is depth: shallow networks simply do not work as well.

We reviewed the basic concepts and some of the breakthrough achievements of deep learning several years ago [63]. Here we briefly describe the origins of deep learning, describe a few of the more recent advances, and discuss some of the future challenges. These challenges include learning with little or no external supervision, coping with test examples that come from a different distribution than the training examples, and using the deep learning approach for tasks that humans solve by using a deliberate sequence of steps which we attend to consciously, tasks that Kahneman [56] calls system 2 tasks, as opposed to system 1 tasks like object recognition or immediate natural language understanding, which generally feel effortless. There are two quite different paradigms for AI.
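To make the description above concrete (layers of activity vectors whose connection strengths are learned by following the stochastic gradient of an objective function), here is a minimal NumPy sketch of a small deep network trained by SGD on a toy regression task. The architecture, task, and hyperparameters are arbitrary illustrations, not anything from the article.

import numpy as np

rng = np.random.default_rng(0)

def init(layer_sizes):
    """One (weights, biases) pair per layer of connection strengths."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    """Return the activity vectors of every layer, input included."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if i < len(params) - 1 else z)   # linear output
    return acts

def sgd_step(params, x, y, lr=0.1):
    """One step of stochastic gradient descent on the mean squared error."""
    acts = forward(params, x)
    delta = 2.0 * (acts[-1] - y) / len(x)        # gradient at the output layer
    for i in reversed(range(len(params))):
        W, b = params[i]
        grad_W = acts[i].T @ delta
        grad_b = delta.sum(axis=0)
        if i > 0:                                 # back-propagate through tanh
            delta = (delta @ W.T) * (1.0 - acts[i] ** 2)
        params[i] = (W - lr * grad_W, b - lr * grad_b)
    return params

# Fit y = sin(x) from random minibatches: the deep (two-hidden-layer) network
# learns an internal representation of the input that a shallow linear map cannot.
params = init([1, 32, 32, 1])
for _ in range(2000):
    xb = rng.uniform(-3.0, 3.0, (64, 1))
    params = sgd_step(params, xb, np.sin(xb))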


Dive into Deep Learning

arXiv.org Artificial Intelligence

Just a few years ago, there were no legions of deep learning scientists developing intelligent products and services at major companies and startups. When the youngest among us (the authors) entered the field, machine learning did not command headlines in daily newspapers. Our parents had no idea what machine learning was, let alone why we might prefer it to a career in medicine or law. Machine learning was a forward-looking academic discipline with a narrow set of real-world applications. And those applications, e.g., speech recognition and computer vision, required so much domain knowledge that they were often regarded as separate areas entirely for which machine learning was one small component. Neural networks then, the antecedents of the deep learning models that we focus on in this book, were regarded as outmoded tools. In just the past five years, deep learning has taken the world by surprise, driving rapid progress in fields as diverse as computer vision, natural language processing, automatic speech recognition, reinforcement learning, and statistical modeling. With these advances in hand, we can now build cars that drive themselves with more autonomy than ever before (and less autonomy than some companies might have you believe), smart reply systems that automatically draft the most mundane emails, helping people dig out from oppressively large inboxes, and software agents that dominate the world's best humans at board games like Go, a feat once thought to be decades away. Already, these tools exert ever-wider impacts on industry and society, changing the way movies are made and diseases are diagnosed, and playing a growing role in basic sciences, from astrophysics to biology.


EMG Signal Classification Using Reflection Coefficients and Extreme Value Machine

arXiv.org Artificial Intelligence

Electromyography (EMG) is a promising approach to human gesture recognition, provided an efficient classifier with high accuracy is available. In this paper, we propose to utilize the Extreme Value Machine (EVM) as a high-performance algorithm for the classification of EMG signals. We employ reflection coefficients obtained from an Autoregressive (AR) model to train a set of classifiers. Our experimental results indicate that EVM achieves better accuracy than the conventional classifiers reported in the literature, which are based on K-Nearest Neighbors (KNN) and Support Vector Machine (SVM).
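A minimal sketch of the feature-extraction step described above: reflection (PARCOR) coefficients of an AR model, computed from a signal's sample autocorrelation via the Levinson-Durbin recursion. The window handling, model order, and the KNN baseline stand in for details the abstract does not specify, and the EVM classifier itself is not part of standard scikit-learn, so it is omitted here.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def reflection_coefficients(x, order=4):
    """Reflection (PARCOR) coefficients of an order-p AR model, computed from
    the sample autocorrelation via the Levinson-Durbin recursion."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]                                   # assumes a non-constant signal
    k = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / e
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * a[i - 1::-1]
        e *= 1.0 - k[i - 1] ** 2
    return k

# Hypothetical usage: one feature vector per windowed EMG recording, with a
# KNN baseline standing in for the paper's comparison classifiers.
# features = np.vstack([reflection_coefficients(w, order=4) for w in windows])
# clf = KNeighborsClassifier(n_neighbors=5).fit(features, labels)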


Disentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA

arXiv.org Machine Learning

We introduce a new general identifiable framework for principled disentanglement referred to as Structured Nonlinear Independent Component Analysis (SNICA). Our contribution is to extend the identifiability theory of deep generative models to a very broad class of structured models. While previous works have shown identifiability for specific classes of time-series models, our theorems extend this to more general temporal structures as well as to models with more complex structures such as spatial dependencies. In particular, we establish the major result that identifiability for this framework holds even in the presence of noise of unknown distribution. The SNICA setting therefore subsumes all the existing nonlinear ICA models for time series and also allows for new, much richer identifiable models. Finally, as an example of our framework's flexibility, we introduce the first nonlinear ICA model for time series that combines the following very useful properties: it accounts for both nonstationarity and autocorrelation in a fully unsupervised setting; performs dimensionality reduction; models hidden states; and enables principled estimation and inference by variational maximum-likelihood.
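For readers unfamiliar with the setting, a generic noisy nonlinear ICA observation model of the kind the abstract refers to can be written as follows; the notation is illustrative and not taken from the paper.

\[
  \mathbf{x}_t = \mathbf{f}(\mathbf{s}_t) + \boldsymbol{\varepsilon}_t,
  \qquad
  p(\mathbf{s}_{1:T}) = p(\mathbf{s}_1) \prod_{t=2}^{T} p(\mathbf{s}_t \mid \mathbf{s}_{t-1}),
\]

where \(\mathbf{f}\) is an unknown injective nonlinear mixing function, the latent sources \(\mathbf{s}_t\) carry the structured (e.g. temporal or spatial) dependencies, and the additive noise \(\boldsymbol{\varepsilon}_t\) is independent of the sources and, in the setting described above, has an unknown distribution. Identifiability means that the sources can be recovered from the observations up to the usual indeterminacies such as permutation and element-wise transformations of the components.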


Contrastive Learning with Continuous Proxy Meta-Data for 3D MRI Classification

arXiv.org Machine Learning

Traditional supervised learning with deep neural networks requires a tremendous amount of labelled data to converge to a good solution. For 3D medical images, it is often impractical to build a large homogeneous annotated dataset for a specific pathology. Self-supervised methods offer a new way to learn a representation of the images in an unsupervised manner with a neural network. In particular, contrastive learning has shown great promise by (almost) matching the performance of fully-supervised CNNs on vision tasks. Nonetheless, this method does not take advantage of available meta-data, such as a participant's age, viewed as prior knowledge. Here, we propose to leverage continuous proxy meta-data in the contrastive learning framework by introducing a new loss called the y-Aware InfoNCE loss. Specifically, we improve the positive sampling during pre-training by adding more positive examples whose proxy meta-data are similar to the anchor's, assuming they share similar discriminative semantic features. With our method, a 3D CNN model pre-trained on $10^4$ multi-site healthy brain MRI scans can extract relevant features for three classification tasks: schizophrenia, bipolar diagnosis, and Alzheimer's detection. When fine-tuned, it also outperforms a 3D CNN trained from scratch on these tasks, as well as state-of-the-art self-supervised methods. Our code is made publicly available here.
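A minimal sketch of the idea described above, assuming an RBF kernel on the continuous meta-data is used to softly weight positives in an InfoNCE-style loss; the kernel bandwidth, temperature, and function name are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def y_aware_info_nce(z1, z2, y, sigma=5.0, tau=0.1):
    """Sketch of a meta-data-weighted InfoNCE loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N scans.
    y:      (N,) continuous proxy meta-data (e.g. age in years).
    Samples whose meta-data lie close to the anchor's are treated as soft
    positives via an RBF kernel weight on the meta-data distance.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                          # (N, N) scaled cosine similarities
    w = torch.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))
    w = w / w.sum(dim=1, keepdim=True)               # per-anchor positive weights
    log_p = F.log_softmax(sim, dim=1)                # contrast each anchor against all views
    return -(w * log_p).sum(dim=1).mean()

# Hypothetical usage: z1, z2 = encoder(view1), encoder(view2); y = ages.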


Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

arXiv.org Artificial Intelligence

Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences by using MAR devices to provide universal accessibility to digital contents. Over the past 20 years, a number of MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: 1) MAR applications; 2) MAR visualisation techniques adaptive to user mobility and contexts; 3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and 4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit both researchers and MAR system developers alike.