Collaborating Authors


What does artificial intelligence do in medicine?


This article was written for The European Sting by our guest writer, Mr. Jakub Kufel, a medical student at Silesia Medical University, Poland. The opinions expressed within reflect only the writer's views and not necessarily The European Sting's position on the issue. Artificial intelligence (AI) is a general concept describing the use of computers to model intelligent behavior with minimal human intervention. The related term "robot" comes from the Czech word robota, meaning forced labor, and originally denoted biosynthetic machines used as workers. AI now touches a wide range of medical topics, from robotics, medical diagnosis, medical statistics, and human biology to today's "omics".

Data-driven models and computational tools for neurolinguistics: a language technology perspective Machine Learning

In this paper, our focus is the connection and influence of language technologies on research in neurolinguistics. We present a review of brain-imaging-based neurolinguistic studies with a focus on natural language representations, such as word embeddings and pre-trained language models. Mutual enrichment of neurolinguistics and language technologies leads to the development of brain-aware natural language representations. The importance of this research area is underscored by its medical applications.
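A common pattern in such studies is a linear "encoding model" that maps word embeddings to recorded brain responses. The following is a minimal sketch of that idea with ridge regression; all data here are simulated, and the sizes, noise level, and regularization value are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical setup: embeddings for n_words words (dimension d) and
# simulated voxel responses. Real studies use fMRI recordings and
# embeddings from pre-trained language models.
rng = np.random.default_rng(0)
n_words, d, n_voxels = 200, 50, 30
X = rng.standard_normal((n_words, d))                 # word embeddings
W_true = rng.standard_normal((d, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_words, n_voxels))

# Ridge-regression encoding model: W = (X^T X + lam I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
Y_hat = X @ W

# Per-voxel encoding accuracy, measured as Pearson correlation
corr = [np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean encoding correlation: {np.mean(corr):.2f}")
```

In practice the correlations are cross-validated on held-out words or stimuli; fitting and scoring on the same data, as above, is only for brevity.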

Fine-grain atlases of functional modes for fMRI analysis Machine Learning

Population imaging markedly increased the size of functional-imaging datasets, shedding new light on the neural basis of inter-individual differences. Analyzing these large data entails new scalability challenges, computational and statistical. For this reason, brain images are typically summarized in a few signals, for instance reducing voxel-level measures with brain atlases or functional modes. A good choice of the corresponding brain networks is important, as most data analyses start from these reduced signals. We contribute finely-resolved atlases of functional modes, comprising from 64 to 1024 networks. These dictionaries of functional modes (DiFuMo) are trained on millions of fMRI functional brain volumes of total size 2.4TB, spanned over 27 studies and many research groups. We demonstrate the benefits of extracting reduced signals on our fine-grain atlases for many classic functional data analysis pipelines: stimuli decoding from 12,334 brain responses, standard GLM analysis of fMRI across sessions and individuals, extraction of resting-state functional-connectome biomarkers for 2,500 individuals, and data compression and meta-analysis over more than 15,000 statistical maps. In each of these analysis scenarios, we compare the performance of our functional atlases with that of other popular references and with a simple voxel-level analysis. Results highlight the importance of using high-dimensional "soft" functional atlases to represent and analyse brain activity while capturing its functional gradients. Analyses on high-dimensional modes achieve similar statistical performance as at the voxel level, but with much reduced computational cost and higher interpretability. In addition to making them available, we provide meaningful names for these modes, based on their anatomical location. This will facilitate the reporting of results.
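The signal-reduction step the abstract describes, summarizing voxel-level time series with a "soft" (continuously weighted) atlas, can be sketched as a least-squares projection. The dimensions below are toy values chosen for illustration; real fMRI data involve on the order of 100k voxels and DiFuMo atlases of 64 to 1024 modes.

```python
import numpy as np

# Toy data: voxel-level time series and a soft atlas of spatial modes.
rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_modes = 100, 500, 8

data = rng.standard_normal((n_timepoints, n_voxels))      # voxel signals
atlas = np.abs(rng.standard_normal((n_modes, n_voxels)))  # soft mode maps

# Reduced signals: least-squares projection of the data onto the modes.
# If data ~ signals @ atlas, then signals = data @ atlas^T (atlas atlas^T)^-1.
signals = data @ atlas.T @ np.linalg.inv(atlas @ atlas.T)
print(signals.shape)  # (100, 8): one low-dimensional time series per mode
```

Downstream pipelines (decoding, GLM analysis, connectome extraction) then operate on the small `signals` matrix rather than on hundreds of thousands of voxel columns.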

A Comprehensive Scoping Review of Bayesian Networks in Healthcare: Past, Present and Future Artificial Intelligence

No comprehensive review of Bayesian networks (BNs) in healthcare has previously been published, making it difficult to organize the research contributions of the present and identify challenges and neglected areas that need to be addressed in the future. This unique and novel scoping review of BNs in healthcare provides an analytical framework for comprehensively characterizing the domain and its current state. The review shows that: (1) BNs in healthcare are not used to their full potential; (2) a generic BN development process is lacking; (3) limitations exist in the way BNs in healthcare are presented in the literature, which impacts understanding, consensus towards systematic methodologies, practice, and adoption of BNs; and (4) a gap exists between having an accurate BN and having a useful BN that impacts clinical practice. This review empowers researchers and clinicians with an analytical framework and findings that will enable understanding of the need to address the problems of restricted aims of BNs, ad hoc BN development methods, and the lack of BN adoption in practice. To map the way forward, the paper proposes future research directions and makes recommendations regarding BN development methods and adoption in practice.
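To make concrete what even the smallest healthcare BN computes, here is a two-node diagnostic network (Disease → Test) with exact inference by Bayes' rule. The probabilities are made-up illustrative values, not clinical figures; real healthcare BNs have many nodes and clinically elicited or learned parameters.

```python
# Hypothetical parameters of a Disease -> Test Bayesian network.
p_disease = 0.01            # prior P(D=1): disease prevalence
p_pos_given_d = 0.95        # sensitivity P(T=1 | D=1)
p_pos_given_not_d = 0.05    # false-positive rate P(T=1 | D=0)

# Posterior P(D=1 | T=1) via Bayes' rule.
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
posterior = p_pos_given_d * p_disease / p_pos
print(round(posterior, 3))  # 0.161: a positive test is far from conclusive
```

The counterintuitive result, a post-test probability of only about 16% despite a sensitive test, illustrates why the review's distinction between an *accurate* BN and a *useful* BN matters: the value for clinicians lies in how such posteriors are framed and acted on.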

Differentiable Graph Module (DGM) for Graph Convolutional Networks Machine Learning

Graph deep learning has recently emerged as a powerful ML concept that generalizes successful deep neural architectures to non-Euclidean structured data. Such methods have shown promising results on a broad spectrum of applications ranging from social science, biomedicine, and particle physics to computer vision, graphics, and chemistry. One limitation of the majority of current graph neural network architectures is that they are often restricted to the transductive setting and rely on the assumption that the underlying graph is known and fixed. In many settings, such as those arising in medical and healthcare applications, this assumption is not necessarily true, since the graph may be noisy, partially or even completely unknown, and one is thus interested in inferring it from the data. This is especially important in inductive settings when dealing with nodes not present in the graph at training time. Furthermore, sometimes such a graph itself may convey insights that are even more important than the downstream task. In this paper, we introduce the Differentiable Graph Module (DGM), a learnable function predicting the edge probabilities in the graph relevant for the task, that can be combined with convolutional graph neural network layers and trained in an end-to-end fashion. We provide an extensive evaluation of applications from the domains of healthcare (disease prediction), brain imaging (gender and age prediction), computer graphics (3D point cloud segmentation), and computer vision (zero-shot learning). We show that our model provides a significant improvement over baselines in both transductive and inductive settings and achieves state-of-the-art results.
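The core idea of predicting edge probabilities from node features can be sketched in a few lines: project the features through a learnable map, then turn pairwise embedding distances into probabilities with a temperature-scaled sigmoid. This numpy sketch only illustrates the shape of the computation; the projection, temperature, and sizes are toy assumptions, and in the real model the projection is a trained neural network and gradients flow through the edge probabilities end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, in_dim, emb_dim = 6, 16, 4

features = rng.standard_normal((n_nodes, in_dim))
W = rng.standard_normal((in_dim, emb_dim)) * 0.1  # learnable in the real model
z = features @ W                                   # node embeddings

# Squared pairwise distances between embeddings.
d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)

# Edge probability: nearby nodes get probability near 1 (t is a temperature,
# and the -1.0 shift is an arbitrary illustrative threshold).
t = 1.0
edge_prob = 1.0 / (1.0 + np.exp(d2 / t - 1.0))
print(edge_prob.shape)  # (6, 6) matrix of differentiable edge probabilities
```

Because every step is differentiable, the graph itself can be learned jointly with the downstream graph-convolutional layers, which is exactly what distinguishes this approach from pipelines with a fixed, hand-built graph.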

Network Clustering Via Kernel-ARMA Modeling and the Grassmannian: The Brain-Network Case Machine Learning

Background Network clustering is the task of assigning nodes to groups via user-defined (statistical) "similarities" among nodal time series (signals), and is ubiquitous across a plethora of disciplines such as computer vision [1], wireless-sensor [2], social [3], and brain networks [4]. In brain networks, the choice of scale and type of data determines how networks are built. At the microscopic level, network nodes might be neurons, and edges could represent anatomical connections such as synapses (structural connectivity), or statistical relationships between firing patterns of neurons (functional connectivity). Similarly, at the macroscopic level, nodes can represent brain regions. At this scale, in structural networks, edges might represent long-range anatomical connections between brain regions or, in functional networks, statistical relationships between regional brain dynamics recorded via functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). Here, we are interested in functional brain networks in which network nodes represent brain regions whose activity can be represented by a time series describing the dynamic evolution of brain activity [5].
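A stripped-down version of functional-network clustering helps fix the setting: build a correlation-based connectivity matrix from nodal time series, threshold it, and group nodes by connected components. The synthetic signals, the 0.5 threshold, and the component-based grouping below are simplifying assumptions for illustration; the paper's kernel-ARMA and Grassmannian machinery is far richer.

```python
import numpy as np

# Six synthetic nodes: three driven by latent signal a, three by signal b.
rng = np.random.default_rng(0)
T = 300
base_a, base_b = rng.standard_normal(T), rng.standard_normal(T)
series = np.stack(
    [base_a + 0.3 * rng.standard_normal(T) for _ in range(3)]
    + [base_b + 0.3 * rng.standard_normal(T) for _ in range(3)]
)

conn = np.corrcoef(series)   # functional connectivity (6 x 6 correlations)
adj = conn > 0.5             # threshold into a binary adjacency matrix

# Cluster = connected component of the thresholded graph (simple BFS/DFS).
labels = -np.ones(6, dtype=int)
cur = 0
for start in range(6):
    if labels[start] >= 0:
        continue
    stack = [start]
    while stack:
        node = stack.pop()
        if labels[node] >= 0:
            continue
        labels[node] = cur
        stack.extend(np.nonzero(adj[node])[0].tolist())
    cur += 1
print(labels.tolist())  # [0, 0, 0, 1, 1, 1]: the two driving signals recovered
```

Hard thresholding discards edge-weight information and is sensitive to the cutoff, which is one motivation for model-based approaches such as the kernel-ARMA formulation of the paper.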

DVNet: A Memory-Efficient Three-Dimensional CNN for Large-Scale Neurovascular Reconstruction Machine Learning

Maps of brain microarchitecture are important for understanding neurological function and behavior, including alterations caused by chronic conditions such as neurodegenerative disease. Techniques such as knife-edge scanning microscopy (KESM) provide the potential for whole-organ imaging at sub-cellular resolution. However, multi-terabyte data sizes make manual annotation impractical and automatic segmentation challenging. Densely packed cells combined with interconnected microvascular networks are a challenge for current segmentation algorithms. The massive size of high-throughput microscopy data necessitates fast and largely unsupervised algorithms. In this paper, we investigate a fully convolutional, deep, and densely connected encoder-decoder for pixel-wise semantic segmentation. The excessive memory complexity often encountered with deep and dense networks is mitigated using skip connections, resulting in fewer parameters and enabling a significant performance increase over prior architectures. The proposed network provides superior performance on semantic segmentation problems from open-source benchmarks. Finally, we demonstrate our network on cellular and microvascular segmentation, enabling quantitative metrics for organ-scale neurovascular analysis.
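The encoder-decoder-with-skip-connections pattern the abstract relies on can be sketched at the level of tensor shapes: downsample in the encoder, upsample in the decoder, and concatenate the encoder features back in so fine spatial detail survives to the output. This is a 2D numpy toy with fixed pooling/upsampling in place of learned convolutions; the actual network is 3D, dense, and trained.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 16, 64, 64))   # (batch, channels, H, W)

def downsample(t):
    """2x2 max pooling, halving the spatial resolution."""
    b, c, h, w = t.shape
    return t.reshape(b, c, h // 2, 2, w // 2, 2).max(axis=(3, 5))

def upsample(t):
    """Nearest-neighbour upsampling, doubling the spatial resolution."""
    return t.repeat(2, axis=2).repeat(2, axis=3)

enc = downsample(x)                        # (1, 16, 32, 32) encoder features
dec = upsample(enc)                        # (1, 16, 64, 64) decoder features
skip = np.concatenate([dec, x], axis=1)    # skip connection: (1, 32, 64, 64)
print(skip.shape)
```

Concatenating rather than recomputing high-resolution features is what lets such architectures keep the parameter count and memory footprint down, the property the abstract highlights.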

EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications Artificial Intelligence

Brain-Computer Interface (BCI) is a powerful communication tool between users and systems, which enhances the capability of the human brain to communicate and interact with the environment directly. Advances in neuroscience and computer science over the past decades have led to exciting developments in BCI, making it a top interdisciplinary research area in computational neuroscience and intelligence. Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications. Many people benefit from EEG-based BCIs, which facilitate continuous monitoring of fluctuations in cognitive states under monotonous tasks in the workplace or at home. In this study, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for gaps in systematic summaries of the past five years (2015-2019). Specifically, we first review the current status of BCI and its significant obstacles. Then, we present advanced signal sensing and enhancement technologies to collect and clean EEG signals, respectively. Furthermore, we demonstrate state-of-the-art computational intelligence techniques, including interpretable fuzzy models, transfer learning, deep learning, and their combinations, to monitor, maintain, or track human cognitive states and operating performance in prevalent applications. Finally, we present several innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCIs.
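One of the most common computational steps in the EEG pipelines such surveys cover is band-power feature extraction. Here is a minimal sketch computing the relative power in the 8-12 Hz alpha band from a single synthetic channel; the sampling rate, signal, and noise level are illustrative assumptions, and real pipelines add filtering, artifact removal, and multi-channel spatial filters.

```python
import numpy as np

fs = 250                       # sampling rate in Hz (typical for EEG)
t = np.arange(0, 2, 1 / fs)    # 2 s of signal
rng = np.random.default_rng(0)
# Synthetic channel: a 10 Hz "alpha rhythm" buried in noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Relative alpha-band (8-12 Hz) power: a classic BCI feature.
band = (freqs >= 8) & (freqs <= 12)
alpha_power = spectrum[band].sum() / spectrum.sum()
print(alpha_power > 0.5)  # the 10 Hz rhythm dominates the spectrum
```

Features of this kind, computed per channel and per band, are what downstream classifiers (fuzzy models, transfer learning, deep networks) consume to track cognitive state.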

2019--A Year of Hope for Alzheimer's Research


In the year just past, Alzheimer's researchers, families, and stakeholders felt renewed hope that new treatments might be within grasp. While the Lazarus story of aducanumab may or may not be enough for FDA approval this year, data from its Phase 3 program solidified a broader signal across four different anti-amyloid antibodies that amyloid can be removed from the brain and that maybe--just maybe--this will also benefit cognition and function if given early at a sufficient dose. The prospect that the amyloid hypothesis is druggable, alone, was enough to re-energize the field. The hope that further trials to define the best doses, patient groups, and treatment regimens will eventually pay off was cause for even more enthusiasm. A boost in funding announced as the U.S. Congress headed for its holiday break also gave cause for celebration going into 2020, though the funding picture is less rosy in other countries. The NIH budget for AD research now stands at $2.8 billion, a $350 million ...