Ribeiro, Antônio H., Ribeiro, Manoel Horta, Paixão, Gabriela M. M., Oliveira, Derick M., Gomes, Paulo R., Canazart, Jéssica A., Ferreira, Milton P. S., Andersson, Carl R., Macfarlane, Peter W., Meira, Wagner Jr., Schön, Thomas B., Ribeiro, Antonio Luiz P.
We present a Deep Neural Network (DNN) model for predicting electrocardiogram (ECG) abnormalities in short-duration 12-lead ECG recordings. The analysis of the digital ECG obtained in a clinical setting can provide a full evaluation of the cardiac electrical activity, yet it has not been studied in an end-to-end machine learning scenario. Using the database of the Telehealth Network of Minas Gerais, under the scope of the CODE (Clinical Outcomes in Digital Electrocardiology) study, we built a novel dataset with more than 2 million ECG tracings, orders of magnitude larger than those used in previous studies. Moreover, our dataset is more realistic, as it consists of 12-lead ECGs recorded during standard in-clinic exams. Using this data, we trained a residual neural network with 9 convolutional layers to map ECG signals with a duration of 7 to 10 seconds into 6 different classes of ECG abnormalities. High performance measures were obtained for all ECG abnormalities, with F1 scores above $80\%$ and specificity indexes over $99\%$. We compare the performance with that of cardiology and emergency resident medical doctors as well as medical students and, considering the F1 score, the DNN matches or outperforms the medical residents and students for all abnormalities. These results indicate that end-to-end automatic ECG analysis based on DNNs, previously used only in a single-lead setup, generalizes well to the 12-lead ECG. This is an important result in that it takes this technology much closer to standard clinical practice.
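As a quick illustration (not taken from the paper itself), the F1 and specificity figures quoted in the abstract are derived from per-class confusion-matrix counts. A minimal sketch, assuming true positives (tp), false positives (fp), false negatives (fn), and true negatives (tn) have already been tallied for one abnormality class:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def specificity(tn: int, fp: int) -> float:
    """Fraction of true negatives among all actual negatives."""
    return tn / (tn + fp)

# Hypothetical counts for one abnormality class:
print(f1_score(tp=80, fp=10, fn=10))   # ~0.889
print(specificity(tn=990, fp=10))      # 0.99
```

Note that in a highly imbalanced screening setting (most ECGs are normal), specificity can be very high even when F1 is modest, which is why the paper reports both.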
Abstract--Most of the research in convolutional neural networks has focused on increasing network depth to improve accuracy, resulting in a massive number of parameters which restricts the trained network to platforms with memory and processing constraints. We propose to modify the structure of the Very Deep Convolutional Neural Networks (VDCNN) model to fit mobile platform constraints while preserving performance. In this paper, we evaluate the impact of Temporal Depthwise Separable Convolutions and Global Average Pooling on the network parameters, storage size, and latency. The squeezed model (SVDCNN) is between 10x and 20x smaller, depending on the network depth, maintaining a maximum size of 6MB. Regarding accuracy, the network experiences a loss between 0.4% and 1.3% and obtains lower latencies compared to the baseline model.
I. INTRODUCTION
The general trend in deep learning approaches has been developing models with an increasing number of layers. Deep models can also learn hierarchical feature representations from images.
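The parameter savings behind depthwise separable convolutions come from factoring one large convolution into a per-channel (depthwise) filter plus a 1x1 channel-mixing (pointwise) convolution. A minimal sketch of the parameter-count arithmetic for the temporal (1D) case, with illustrative channel and kernel sizes not taken from the paper (biases ignored):

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # A standard 1D conv learns c_out filters, each spanning
    # all c_in channels over a kernel of length k.
    return k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one length-k filter per input channel.
    # Pointwise step: a 1x1 conv mixing c_in channels into c_out.
    return k * c_in + c_in * c_out

# Illustrative temporal conv: kernel 3, 256 -> 256 channels.
std = standard_conv_params(256, 256, 3)          # 196608
sep = depthwise_separable_params(256, 256, 3)    # 66304
print(std / sep)                                 # ~2.97x fewer parameters
```

The ratio approaches k (the kernel size) as the channel count grows, which is why replacing standard temporal convolutions this way, combined with Global Average Pooling in place of large fully connected layers, can shrink a model by an order of magnitude.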
Microsoft India is launching a research group that will leverage artificial intelligence to deliver large-scale eye care in collaboration with Hyderabad-based L V Prasad Eye Institute. The Microsoft Intelligent Network for Eyecare (MINE) will work with a consortium of research and technology institutions around the world, including the University of Miami, Federal University of Sao Paulo and Australia's Brien Holden Vision Institute. The idea is similar to Google DeepMind's project, which targets the UK and works with their National Health Services to use artificial intelligence to detect and treat blindness-causing eye diseases. India is a logical jumping-off point for the project, as it is home to some 55 million of the world's 285 million people living with vision impairment. Using Microsoft's cloud platform technology Cortana Intelligence Suite, MINE will collaborate and work from datasets of patients around the world to develop machine learning predictive models for vision impairment and eye disease, with the ultimate goal of eliminating avoidable blindness and scaling worldwide delivery of eye care services.
It's official: driverless cars have hit the race tracks. Roborace, the autonomous race car maker, had its two self-driving 'DevBots' compete against each other at the Formula E Buenos Aires ePrix. The race didn't go without its surprises: one car had to dodge a stray dog that ended up on the race track, and the other hit a barrier and was unable to finish the race. Roborace's self-driving car races will take place at Formula E events throughout 2017. All competing cars will be made identically.
Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro, all imagined by AI. At this week's NeurIPS conference, we introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still early stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development, and architecture. The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate an eight-block world rendered by the neural network.