Rapid Feature Learning with Stacked Linear Denoisers

arXiv.org Artificial Intelligence

We investigate unsupervised pre-training of deep architectures as feature generators for "shallow" classifiers. Stacked Denoising Autoencoders (SdA), when used as feature pre-processing tools for SVM classification, can lead to significant improvements in accuracy - however, at the price of a substantial increase in computational cost. In this paper we present a simple algorithm which mimics the layer-by-layer training of SdAs. In contrast to SdAs, however, our algorithm requires no training through gradient descent, as the parameters can be computed in closed form. It can be implemented in less than 20 lines of MATLAB™ and reduces the computation time from several hours to mere seconds. We show that our feature transformation reliably and significantly improves SVM classification results on all our data sets - often outperforming SdAs and even deep neural networks on three out of four deep learning benchmarks.
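
The abstract does not spell out the closed-form construction, but a gradient-free linear denoising layer can be built along the lines of the marginalized denoising autoencoder (mSDA) of Chen et al. The NumPy sketch below is only an illustration of that idea, not the authors' exact algorithm; the corruption probability p, the ridge term, and the tanh squashing are assumptions.

```python
import numpy as np

def linear_denoising_layer(X, p=0.5):
    """Closed-form linear denoising layer (mSDA-style sketch).

    X : (d, n) data matrix, one example per column.
    p : probability that an input feature is corrupted (set to zero).
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])       # append a constant bias feature
    q = np.full(d + 1, 1.0 - p)                # survival probability of each feature
    q[-1] = 1.0                                # the bias feature is never corrupted
    S = Xb @ Xb.T                              # scatter matrix of the clean data
    Q = S * np.outer(q, q)                     # E[x_corrupt x_corrupt^T], off-diagonal terms
    np.fill_diagonal(Q, q * np.diag(S))        # diagonal needs only one survival factor
    P = S * q[np.newaxis, :]                   # E[x_clean x_corrupt^T]
    # Reconstruction weights solve W Q = P in closed form (small ridge for stability)
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P[:d].T).T
    H = np.tanh(W @ Xb)                        # nonlinearly squashed new features
    return W, H
```

Stacking amounts to feeding H back in as the next layer's input; the resulting features (often concatenated with the raw inputs) would then be handed to the SVM.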


CeliacNet: Celiac Disease Severity Diagnosis on Duodenal Histopathological Images Using Deep Residual Networks

arXiv.org Machine Learning

Celiac Disease (CD) is a chronic autoimmune disease that affects the small intestine in genetically predisposed children and adults. Gluten exposure triggers an inflammatory cascade which leads to compromised intestinal barrier function. If this enteropathy goes unrecognized, it can lead to anemia, decreased bone density, and, in longstanding cases, intestinal cancer. The prevalence of the disorder is 1% in the United States. An intestinal (duodenal) biopsy is considered the "gold standard" for diagnosis. Mild CD might go unnoticed due to non-specific clinical symptoms or mild histologic features. In this work, we trained a model based on deep residual networks to diagnose CD severity using a histological scoring system called the modified Marsh score. The proposed model was evaluated on an independent set of 120 whole-slide images from 15 CD patients and achieved an AUC greater than 0.96 for all classes. These results demonstrate the diagnostic power of the proposed model for CD severity classification using histological images.
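
For a concrete picture, a residual-network classifier for modified Marsh severity grades could be set up roughly as in the PyTorch/torchvision sketch below. This is an illustration under stated assumptions, not the paper's configuration: the ResNet-50 backbone, ImageNet initialization, four-class head, and optimizer settings are all placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_MARSH_CLASSES = 4  # assumed grouping of modified Marsh scores, not the paper's exact classes

# ResNet backbone with its ImageNet head swapped for a Marsh-score classifier
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_MARSH_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(patches, labels):
    """One optimization step on a batch of histology image patches."""
    model.train()
    optimizer.zero_grad()
    logits = model(patches)            # (batch, NUM_MARSH_CLASSES)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whole-slide images are typically too large to feed to such a network directly, so in practice they would be tiled into patches before training and evaluation.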


What is Deep Learning? - Machine Learning Mastery

#artificialintelligence

Geoffrey Hinton is a pioneer in the field of artificial neural networks and co-authored the seminal paper that popularized the backpropagation algorithm for training multilayer perceptron networks. He may have started the use of the phrase "deep" to describe the development of large artificial neural networks. In 2006 he co-authored a paper titled "A Fast Learning Algorithm for Deep Belief Nets", which describes an approach to training a "deep" (as in many-layered) network of stacked restricted Boltzmann machines. The abstract reads: "Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory." This paper and the related paper Hinton co-authored on an undirected deep network, titled "Deep Boltzmann Machines", were well received by the community (now cited many hundreds of times) because they were successful examples of greedy layer-wise training, allowing many more layers in feedforward networks.


Deep learning definition, algorithms, models, applications & advantages - Science online

#artificialintelligence

Deep learning is also known as deep structured learning or hierarchical learning. It is part of a broader family of machine learning methods based on the layers used in artificial neural networks; deep learning is a subset of machine learning, which is in turn a subfield of AI. Deep learning applications are used in industries ranging from automated driving to medical devices.

It is a class of machine learning algorithms that uses a cascade of multiple layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output of the previous layer as its input. These layers can be learned in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners, enabling computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. As a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks, deep learning can teach computers to do what comes naturally to humans: learn by example. It is used in driverless cars, allowing them to recognize a stop sign or to distinguish a pedestrian from a lamppost. The computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance; they are trained using large sets of labeled data and neural network architectures with many layers.

Artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog. Although they were inspired by information processing and distributed communication nodes in biological systems, they differ in many structural and functional properties from biological brains, which makes them incompatible with the neurological evidence.
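
To make the "cascade of layers" idea concrete, here is a minimal sketch of a feedforward stack in which each layer's output becomes the next layer's input. The layer sizes, the ReLU nonlinearity, and the use of plain NumPy are illustrative assumptions, not tied to any of the sources above.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One nonlinear processing layer: affine transform followed by ReLU."""
    return np.maximum(0.0, x @ w + b)

# A three-layer cascade: raw input -> 64 -> 32 -> 10 features.
# Each successive layer consumes the representation produced by the previous one.
sizes = [100, 64, 32, 10]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(5, sizes[0]))    # a batch of 5 raw input vectors
for w, b in zip(weights, biases):
    x = layer(x, w, b)                # output of layer k is input to layer k+1

print(x.shape)                        # (5, 10): the final, most abstract representation
```

In a real system the weights would be fit to labeled data (supervised) or to the structure of the inputs themselves (unsupervised) rather than drawn at random as they are here.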