
Binary and Multiclass Classifiers based on Multitaper Spectral Features for Epilepsy Detection

arXiv.org Machine Learning

Epilepsy is one of the most common neurological disorders and can be diagnosed through electroencephalography (EEG), in which the following epileptic events can be observed: pre-ictal, ictal, post-ictal, and interictal. In this paper, we present a novel method for epilepsy detection in two classification contexts: binary and multiclass. For feature extraction, a total of 105 measures were extracted from the power spectrum, spectrogram, and bispectrogram. For classifier building, eight different machine learning algorithms were used. Our method was applied to a widely used EEG database. As a result, the random forest and the multilayer perceptron trained with backpropagation reached the highest accuracy for the binary (98.75%) and multiclass (96.25%) classification problems, respectively. However, statistical tests did not identify a model that performed significantly better than the other classifiers. Likewise, the evaluation based on confusion matrices did not single out a classifier that stands out from the other models for EEG classification. Even so, our results are promising and competitive with findings in the literature.
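The abstract names multitaper spectral features but does not spell out the pipeline or the exact 105 measures. As a rough illustration of the general technique, a multitaper power-spectrum estimate tapers the signal with several DPSS (Slepian) windows and averages the resulting periodograms, which reduces variance compared with a single-window estimate; band powers over such a spectrum are one common kind of EEG feature. The sketch below is a generic implementation using NumPy and SciPy, not the authors' code; the function names and parameter choices (`nw`, `k`, the band edges) are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss  # DPSS (Slepian) tapers


def multitaper_psd(x, fs, nw=4.0, k=7):
    """Multitaper PSD estimate: average periodograms over k DPSS tapers.

    Illustrative sketch only -- nw (time-bandwidth product) and k
    (number of tapers) are typical defaults, not the paper's settings.
    """
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)               # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs            # average across tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd


def band_power(freqs, psd, lo, hi):
    """Mean PSD within [lo, hi] Hz -- one simple spectral feature."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()


# Example: a 10 Hz tone sampled at 256 Hz peaks in the alpha band (8-13 Hz).
fs = 256
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)
freqs, psd = multitaper_psd(x, fs)
alpha = band_power(freqs, psd, 8, 13)
```

Feature vectors built from such band powers (per channel, per epoch) could then be fed to any of the eight classifiers mentioned, e.g. a random forest.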


MIT researchers warn that deep learning is approaching computational limits

#artificialintelligence

That's according to researchers at the Massachusetts Institute of Technology, Underwood International College, and the University of Brasilia, who found in a recent study that progress in deep learning has been "strongly reliant" on increases in compute. It's their assertion that continued progress will require "dramatically" more computationally efficient deep learning methods, either through changes to existing techniques or via new as-yet-undiscovered methods. "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote. "Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."


MIT researchers warn that deep learning is reaching its computational limit

#artificialintelligence

Demand for deep learning is growing so rapidly that we are approaching the computational limits of the technology. A recent study suggests that progress in deep learning is heavily dependent on increases in computing power. Researchers from the Massachusetts Institute of Technology (MIT), the MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia found that deep learning is strongly reliant on increases in compute. The researchers believe that continued progress in deep learning will require dramatically more computationally efficient methods. In the research paper, the co-authors wrote, "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modelling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive. Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."