
Machine Learning & Artificial Intelligence: Main Developments in 2017 and Key Trends in 2018


At KDnuggets, we try to keep our finger on the pulse of main events and developments in industry, academia, and technology. We also do our best to look forward to key trends on the horizon. To close out 2017, we recently asked some of the leading experts in Big Data, Data Science, Artificial Intelligence, and Machine Learning for their opinion on the most important developments of 2017 and the key trends they expect in 2018. This post, the first in this series of year-end wrap-ups, considers what happened in Machine Learning & Artificial Intelligence this year, and what may be on the horizon for 2018. The question we posed: "What were the main machine learning & artificial intelligence related developments in 2017, and what key trends do you see in 2018?"

Deep Learning for Disaster Recovery – Insight Data


With global climate change, devastating hurricanes are occurring with higher frequency. After a hurricane, roads are often flooded or washed out, making them treacherous for motorists. According to The Weather Channel, almost two of every three U.S. flash flood deaths from 1995–2010, excluding fatalities from Hurricane Katrina, occurred in vehicles. During my Insight A.I. Fellowship, I designed a system that detects flooded roads and created an interactive map app. Using state-of-the-art computer vision deep learning methods, the system automatically annotates flooded, washed out, or otherwise severely damaged roads from satellite imagery.
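The core task described — labeling patches of satellite imagery as flooded or passable — can be framed as per-tile binary classification. Below is a deliberately minimal sketch of that framing using a plain logistic model over simple pixel statistics; the actual system uses deep convolutional networks, and the features, weights, and threshold here are invented purely for illustration:

```python
import numpy as np

def tile_features(tile):
    # Toy features: per-channel mean and std of a tile.
    # (A real system would use learned CNN features instead.)
    return np.concatenate([tile.mean(axis=(0, 1)), tile.std(axis=(0, 1))])

def predict_flooded(tiles, w, b):
    # Logistic score per tile; scores above 0.5 are annotated as "flooded".
    X = np.stack([tile_features(t) for t in tiles])
    scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return scores > 0.5

rng = np.random.default_rng(0)
# Synthetic 16x16 RGB tiles: "flooded" tiles are darker on average.
dry = rng.uniform(0.4, 0.9, size=(5, 16, 16, 3))
wet = rng.uniform(0.0, 0.3, size=(5, 16, 16, 3))
w = np.array([-1.0, -1.0, -1.0, 0.0, 0.0, 0.0])  # assumed weights, for demo only
b = 1.8
labels = predict_flooded(list(dry) + list(wet), w, b)
```

In the real pipeline, tile-level predictions like these would be stitched back into map coordinates to drive the interactive app.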

Google's AIY Vision Kit Augments Pi With Vision Processor


Google has announced its soon-to-be-available Vision Kit, the next easy-to-assemble Artificial Intelligence Yourself (AIY) product. You'll have to provide your own Raspberry Pi Zero W, but that's okay, since what makes this special is the VisionBonnet board Google does provide: essentially a low-power neural network accelerator board running TensorFlow. The VisionBonnet is built around the Intel Movidius Myriad 2 (aka MA2450) vision processing unit (VPU) chip. See the video below for an overview of this chip; what it allows is the rapid processing of compute-intensive neural networks. We don't think you'd use it for training the neural nets, just for doing the inference — or in human terms, for making use of the trained neural nets.
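The training-versus-inference distinction drawn above can be illustrated with a toy two-layer network whose weights are fixed constants: at deployment time, only a forward pass runs on the device, with no gradients or weight updates. All numbers here are made up for demonstration:

```python
import numpy as np

# Weights are fixed at deployment time (trained elsewhere, e.g. on a workstation).
W1 = np.array([[1.0, -1.0],
               [0.5,  0.5]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0],
               [-1.0]])
b2 = np.array([0.2])

def infer(x):
    # Forward pass only: no gradient computation, no weight updates.
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

out = infer(np.array([1.0, 2.0]))
```

Because inference needs only multiplies, adds, and simple nonlinearities over frozen weights, it maps well onto low-power accelerators like the Myriad 2, whereas training's backward passes and optimizer state do not.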

Artificial intelligence and deep learning could usher in 24/7 advisors – The Insurance and Investment Journal


Artificial intelligence and deep learning may soon deliver 24/7 advisors that double as insurance fraud detectors. First, insurers must learn to use data. Stéphane Tremblay, team leader at the National Research Council's (NRC) Data Analytics Centre, specializes in machine learning. The Analytics Centre helps businesses in all sectors face the challenge of big data. The NRC invests $1 billion in research and development each year.

Distributing control of deep learning training delivers 10x performance improvement


My IBM Research AI team and I recently completed the first formal theoretical study of the convergence rate and communications complexity associated with a decentralized distributed approach in a deep learning training setting. The empirical evidence shows that in specific configurations, a decentralized approach can deliver a 10x performance boost over a centralized approach without additional complexity. A paper describing our work has been accepted for oral presentation at the NIPS 2017 Conference, one of only 40 of 3,240 submissions selected for that honor. Supervised machine learning generally consists of two phases: 1) training (building a model) and 2) inference (making predictions with the model). The training phase involves finding optimal values for a model's parameters such that error on a set of training examples is minimized, and the model generalizes to new data.
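The two phases described — training that minimizes error on examples, then inference on new data — can be sketched in miniature with gradient descent on a one-parameter model. This is a generic illustration of the two-phase setup, not the distributed algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy data generated as y = 3x + noise; the "model" is a single weight w.
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + rng.normal(0, 0.1, size=100)

# Phase 1: training -- minimize mean squared error by gradient descent.
w = 0.0
lr = 0.5
for _ in range(100):
    grad = 2 * np.mean((w * X - y) * X)  # d/dw of mean((w*x - y)^2)
    w -= lr * grad

# Phase 2: inference -- apply the fitted parameter to new inputs.
def predict(x_new):
    return w * x_new
```

Distributed training parallelizes phase 1 across many workers; the centralized-versus-decentralized question in the paper is about how those workers exchange parameter updates.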

Generalization Theory and Deep Nets, An introduction


Deep learning holds many mysteries for theory, as we have discussed on this blog. Lately many ML theorists have become interested in the generalization mystery: why do trained deep nets perform well on previously unseen data, even though they have far more free parameters than the number of datapoints (the classic "overfitting" regime)? Zhang et al.'s paper Understanding Deep Learning Requires Rethinking Generalization played some role in bringing attention to this challenge. Their main experimental finding is that if you take a classic convnet architecture, say AlexNet, and train it on images with random labels, you can still achieve very high accuracy on the training data. Needless to say, the trained net is subsequently unable to predict the (random) labels of still-unseen images, which means it doesn't generalize.
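Zhang et al.'s phenomenon can be reproduced in miniature with any sufficiently over-parameterized model: with more parameters than training points, even purely random labels can be fit perfectly, yet accuracy on fresh random labels stays at chance. A small sketch using a linear model on random features (a stand-in for a deep net, chosen so the example runs in a few lines):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200          # more parameters (d) than training points (n)
X_train = rng.normal(size=(n, d))
y_train = rng.choice([-1.0, 1.0], size=n)   # labels are pure noise

# Minimum-norm least-squares fit: enough capacity to memorize the noise exactly.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
train_acc = np.mean(np.sign(X_train @ w) == y_train)

# Unseen points with fresh random labels: predictions are no better than chance.
X_test = rng.normal(size=(1000, d))
y_test = rng.choice([-1.0, 1.0], size=1000)
test_acc = np.mean(np.sign(X_test @ w) == y_test)
```

The puzzle the post discusses is why deep nets trained on *real* labels generalize well, given that — as this memorization experiment shows — their capacity alone would let them fit anything.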

The Rise of Artificial Intelligence through Deep Learning – Yoshua Bengio – TEDxMontreal


A revolution in AI is occurring thanks to progress in deep learning. How far are we towards the goal of achieving human-level AI? What are some of the main challenges ahead? Yoshua Bengio believes that understanding the basics of AI is within every citizen's reach, and that democratizing these issues is important so that our societies can make the best collective decisions regarding the major changes AI will bring, making those changes beneficial and advantageous for all.

Life Extension Daily News


Computer algorithms analyzing digital pathology slide images were shown to detect the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists, in a new study published online in the Journal of the American Medical Association. Researchers competed in an international challenge in 2016 to produce computer algorithms that detect the spread of breast cancer by analyzing tissue slides of sentinel lymph nodes — the lymph node closest to a tumor and the first place it would spread. The performance of the algorithms was compared against that of a panel of pathologists participating in a simulation exercise. [Figure: images of lymph node tissue sections used to test the ability of the deep learning algorithms to detect cancer metastasis.] Specifically, in cross-sectional analyses that evaluated 32 algorithms, seven deep learning algorithms showed greater discrimination than a panel of 11 pathologists in a simulated time-constrained diagnostic setting, with an area under the curve of 0.994 (best algorithm) versus 0.884 (best pathologist). The study found that some computer algorithms were better at detecting cancer spread than pathologists in an exercise that mimicked routine pathology workflow.
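The area-under-the-curve (AUC) figures quoted above (0.994 versus 0.884) measure discrimination: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch of that pairwise definition, with made-up scores:

```python
import numpy as np

def auc(scores, labels):
    # AUC = P(score of a random positive > score of a random negative),
    # counting ties as one half.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2])  # hypothetical algorithm outputs
labels = np.array([1, 1, 0, 1, 0])            # 1 = metastasis present
result = auc(scores, labels)
```

An AUC of 1.0 means every positive outscores every negative; 0.5 is chance, which is why 0.994 represents near-perfect discrimination.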

Deep learning and artificial intelligence: Making a big deal of big data


AWS DeepLens – Looking for a new way to learn machine learning? Let a machine teach you with AWS DeepLens, the world's first deep-learning-enabled video camera for developers. Designed to connect securely to a variety of AWS offerings, including AWS IoT, Amazon SQS, Amazon SNS, and Amazon DynamoDB, AWS DeepLens uses Amazon Kinesis Video Streams to stream video back to AWS and Amazon Rekognition Video to apply advanced video analytics. Easy to customize and fully programmable with AWS Lambda, AWS DeepLens runs on any deep learning framework, including TensorFlow and Caffe.

Amazon SageMaker – Amazon SageMaker offers developers and data scientists a quick and simple way to build, train, and deploy machine learning models at any scale.

Apple Releases Turi ML Software as Open Source


Apple last week released Turi Create, an open source package that it says will make it easy for mobile app developers to infuse machine learning into their products with just a few lines of code. "You don't have to be a machine learning expert to add recommendations, object detection, image classification, image similarity, or activity classification to your app," the company says in the GitHub description for Turi Create. From a desktop computer running macOS, Linux, or Windows, Turi Create allows users to apply several machine learning algorithms, including classifiers (like nearest neighbor, SVM, random forests); regression (logistic regression, boosted decision trees); graph analytics (PageRank, K-Core decomposition, triangle count); clustering (K-Means, DBSCAN); and topic models. The software automates the application of the algorithms to a variety of input data, including text, images, audio, video, and sensor data. Users can work with large data sets on a single machine, Apple says.
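To make the listed algorithms concrete, here is the simplest of them — nearest-neighbor classification — sketched from scratch: each query point takes the label of its closest training point. This is a generic illustration of the algorithm, not Turi Create's own API, which wraps such models behind high-level `create()` calls:

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, X_query):
    # Assign each query point the label of its closest training point
    # (1-nearest-neighbor under Euclidean distance).
    dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[np.argmin(dists, axis=1)]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array(["cat", "cat", "dog", "dog"])
X_query = np.array([[0.05, 0.1], [5.1, 5.0]])
preds = nearest_neighbor_predict(X_train, y_train, X_query)
```

Toolkits like Turi Create add the surrounding machinery — data loading, feature extraction, model export — so that developers invoke models like this without writing the algorithm themselves.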