Deep Learning


Life Extension Daily News

#artificialintelligence

Computer algorithms analyzing digital pathology slide images were shown to detect the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists, in a new study published online in the Journal of the American Medical Association. Researchers competed in an international challenge in 2016 to produce computer algorithms that detect the spread of breast cancer by analyzing tissue slides of sentinel lymph nodes, the lymph node closest to a tumor and the first place it would spread. The performance of the algorithms was compared against that of a panel of pathologists participating in a simulation exercise.

[Figure: images of lymph node tissue sections used to test the ability of the deep learning algorithms to detect cancer metastasis.]

Specifically, in cross-sectional analyses that evaluated 32 algorithms, seven deep learning algorithms showed greater discrimination than a panel of 11 pathologists in a simulated time-constrained diagnostic setting, with an area under the curve of 0.994 (best algorithm) versus 0.884 (best pathologist). The study found that some computer algorithms were better at detecting cancer spread than pathologists in an exercise that mimicked routine pathology workflow.


Building an Audio Classifier using Deep Neural Networks

@machinelearnbot

Understanding sound is one of the basic tasks that our brain performs, and sounds can be broadly classified into speech and non-speech. We have noise-robust speech recognition systems in place, but there is still no general-purpose acoustic scene classifier that can enable a computer to listen to and interpret everyday sounds and act on them as humans do, such as moving out of the way when we hear a horn or a dog barking behind us. A model is only as complex as its data, so getting labelled data is very important in machine learning: the complexity of machine learning systems arises from the data itself, not from the algorithms.
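As a rough illustration of the kind of classifier the article describes, here is a minimal sketch, assuming librosa for feature extraction and Keras for the model; the file paths and the two-class speech/non-speech labels are placeholders, not the article's actual pipeline.

```python
# Minimal audio-classification sketch (illustrative, not the article's pipeline):
# summarize each clip as a mean MFCC vector, then train a small dense network.
import numpy as np
import librosa
import tensorflow as tf

def extract_features(path, n_mfcc=40):
    """Load an audio file and summarize it as the mean MFCC vector."""
    signal, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # shape: (n_mfcc,)

# Hypothetical labelled dataset: (wav_path, label) pairs,
# with label 0 = non-speech, 1 = speech.
dataset = [("sounds/horn.wav", 0), ("sounds/talk.wav", 1)]
X = np.stack([extract_features(p) for p, _ in dataset])
y = np.array([lbl for _, lbl in dataset])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(40,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=2)
```

Summarizing each clip as a mean MFCC vector discards timing information; a spectrogram fed to a convolutional or recurrent model is the usual next step.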


How to pass multiple inputs (features) to LSTM using Tensorflow?

@machinelearnbot

I have to predict the performance of an application. The inputs will be time series of past performance data for the application, CPU usage data for the server where the application is hosted, memory usage data, network bandwidth usage, etc. I'm trying to build a solution using an LSTM that takes these inputs and predicts the application's performance for the next week. I'm able to build a solution that takes one input, i.e., the past performance data of the application. I'm currently stuck at the part where I have to pass these multiple inputs.
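One common way to do this (a sketch, not the only solution): align the series on a shared time axis and stack each metric as a feature channel, so the LSTM consumes tensors of shape (batch, timesteps, num_features). The window lengths and random placeholder data below are assumptions for illustration.

```python
# Sketch: treat each metric (performance, CPU, memory, bandwidth) as one
# feature channel, so the LSTM input has shape (batch, timesteps, num_features).
import numpy as np
import tensorflow as tf

timesteps, num_features = 168, 4  # e.g. one week of hourly samples, 4 metrics
horizon = 168                     # predict the next week, hourly

# Placeholder training data: 32 windows of aligned, normalized series.
X = np.random.rand(32, timesteps, num_features).astype("float32")
y = np.random.rand(32, horizon).astype("float32")  # future performance values

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(timesteps, num_features)),
    tf.keras.layers.Dense(horizon),  # one output per future timestep
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=8)
```

The key point is that "multiple inputs" here are not separate tensors: each metric becomes one column of the last axis, so the single-input model generalizes by changing num_features from 1 to 4.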


These deep learning algorithms outperformed a panel of 11 pathologists

#artificialintelligence

During a 2016 simulation exercise, researchers evaluated the ability of 32 different deep learning algorithms to detect lymph node metastases in patients with breast cancer. Each algorithm's performance was then compared to that of a panel of 11 pathologists working with a time constraint (WTC). Overall, the team found that seven of the algorithms outperformed the panel of pathologists, publishing an in-depth analysis in JAMA. "To our knowledge, this is the first study that shows that interpretation of pathology images can be performed by deep learning algorithms at an accuracy level that rivals human performance," wrote lead author Babak Ehteshami Bejnordi, MS, of Radboud University Medical Center in Nijmegen, the Netherlands, and colleagues. The simulation took place during the Cancer Metastases in Lymph Nodes Challenge 2016 (CAMELYON16) in the Netherlands.


Pruning AI networks without impacting performance

#artificialintelligence

In a spotlight paper from the 2017 NIPS Conference, my team and I presented an AI optimization framework we call Net-Trim, a layer-wise convex scheme for pruning a pre-trained deep neural network. Deep learning has become a method of choice for many AI applications, ranging from image recognition to language translation. Thanks to algorithmic and computational advances, we are now able to train bigger and deeper neural networks, resulting in increased AI accuracy. However, because of increased power consumption and memory usage, it is impractical to deploy such models on embedded devices with limited hardware resources and power constraints. One practical way to overcome this challenge is to reduce the model complexity without sacrificing accuracy.
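Net-Trim itself solves a layer-wise convex program; as a much simpler stand-in for the general idea of pruning, here is a sketch of plain magnitude thresholding on one layer's weights, with illustrative sizes and a placeholder keep fraction:

```python
# Simple magnitude pruning (not Net-Trim's convex scheme): zero out the
# smallest-magnitude weights of a layer and check the resulting sparsity.
import numpy as np

def magnitude_prune(weights, keep_fraction=0.1):
    """Keep only the largest-magnitude entries of a weight matrix."""
    k = max(1, int(weights.size * keep_fraction))
    threshold = np.sort(np.abs(weights), axis=None)[-k]  # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))                 # a dense layer's weights
W_sparse = magnitude_prune(W, keep_fraction=0.1)
print("nonzero fraction:", np.count_nonzero(W_sparse) / W.size)
```

The point of a pruning scheme like Net-Trim over this baseline is to choose the sparsity pattern so that each layer's outputs stay close to the original network's, preserving accuracy rather than just weight magnitudes.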


NIPS 2017 -- Day 2 Highlights – Insight Data

@machinelearnbot

We are back with some highlights from the second day of NIPS. A lot of fascinating research was showcased today, and we are excited to share some of our favorites with you. If you missed them, feel free to check our Day 1 and Day 3 highlights! One of the most memorable sessions of the first two days was today's invited talk by Kate Crawford on bias in Machine Learning. We recommend taking a look at the feature image of this post, which represents modern Machine Learning datasets as an attempt at creating a taxonomy of the world.


Why use RBF Learning rather than Deep Learning in an industrial environment

#artificialintelligence

One of today's most overused buzzwords is "Artificial Intelligence". Both the technical and the general press are full of articles about machines that drive cars autonomously and invent new languages. Machine Learning is an essential part of the AI puzzle, and Deep Learning is one of the most popular approaches to implementing Machine Learning. Interestingly, Deep Learning is not new: Geoffrey Hinton demonstrated the use of back-propagation of errors for training multi-layer neural networks in 1986, more than 30 years ago.
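The excerpt does not show what RBF learning looks like in practice; as a rough illustration only, here is a minimal Gaussian RBF network: centers picked by k-means, Gaussian activations, and a linear least-squares readout. The data and hyperparameters are placeholders, not from the article.

```python
# Minimal Gaussian RBF network sketch: k-means centers, Gaussian
# activations, linear least-squares readout. Illustrative data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy target

n_centers, gamma = 20, 2.0
centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_features(X, centers, gamma):
    """Gaussian activations: exp(-gamma * squared distance to each center)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

Phi = rbf_features(X, centers, gamma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # fit the linear readout
y_hat = Phi @ w
print("train MSE:", float(np.mean((y - y_hat) ** 2)))
```

Because only the linear readout is fitted once the centers are fixed, training is fast and deterministic, which is part of the industrial appeal the headline alludes to.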


Artificial intelligence promising for CA, retinopathy diagnoses

#artificialintelligence

Babak Ehteshami Bejnordi, from the Radboud University Medical Center in Nijmegen, Netherlands, and colleagues compared the performance of automated deep learning algorithms for detecting metastases in hematoxylin and eosin-stained lymph node tissue sections from women with breast cancer against pathologists' diagnoses in a diagnostic setting. The researchers found that the area under the receiver operating characteristic curve (AUC) ranged from 0.556 to 0.994 across the algorithms. The lesion-level true-positive fraction achieved by the top-performing algorithm was comparable to that of the pathologist without a time constraint, at a mean of 0.0125 false-positives per normal whole-slide image. Daniel Shu Wei Ting, M.D., Ph.D., from the Singapore National Eye Center, and colleagues assessed the performance of a deep learning system (DLS) for detecting referable diabetic retinopathy and related eye diseases using 494,661 retinal images. The researchers found that the AUC of the DLS for referable diabetic retinopathy was 0.936, with sensitivity and specificity of 90.5 and 91.6 percent, respectively.
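To make the reported metrics concrete, here is a minimal sketch of how AUC, sensitivity, and specificity are computed with scikit-learn, on placeholder labels and scores rather than the studies' data:

```python
# AUC, sensitivity, and specificity on placeholder data (not the studies').
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])              # 1 = disease present
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.6, 0.2])

auc = roc_auc_score(y_true, y_score)                      # area under the ROC curve
y_pred = (y_score >= 0.5).astype(int)                     # threshold at 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                              # true-positive rate
specificity = tn / (tn + fp)                              # true-negative rate
print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```

AUC summarizes ranking quality across all thresholds, while sensitivity and specificity describe performance at one chosen operating point, which is why the studies report both.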


Accuracy of Artificial Intelligence Assessed in CA Diagnosis

#artificialintelligence

A deep learning algorithm can detect metastases in sections of lymph nodes from women with breast cancer, and a deep learning system (DLS) has high sensitivity and specificity for identifying diabetic retinopathy, according to two studies published online December 12 in the Journal of the American Medical Association. Babak Ehteshami Bejnordi, from the Radboud University Medical Center in Nijmegen, Netherlands, and colleagues compared the performance of automated deep learning algorithms for detecting metastases in hematoxylin and eosin-stained lymph node tissue sections from women with breast cancer against pathologists' diagnoses in a diagnostic setting. The researchers found that the area under the receiver operating characteristic curve (AUC) ranged from 0.556 to 0.994 across the algorithms. The lesion-level true-positive fraction achieved by the top-performing algorithm was comparable to that of the pathologist without a time constraint, at a mean of 0.0125 false-positives per normal whole-slide image. Daniel Shu Wei Ting, MD, PhD, from the Singapore National Eye Center, and colleagues assessed the performance of a DLS for detecting referable diabetic retinopathy and related eye diseases using 494,661 retinal images.


Understanding Hinton's Capsule Networks. Part I: Intuition.

#artificialintelligence

CNNs (convolutional neural networks) are awesome. They are one of the reasons deep learning is so popular today. They can do amazing things that people used to think computers would not be capable of doing for a long, long time. Nonetheless, they have limits and fundamental drawbacks. Let us consider a very simple and non-technical example.