Deep Learning


What is TensorFlow? The machine learning library explained

#artificialintelligence

Machine learning is a complex discipline. But implementing machine learning models is far less daunting and difficult than it used to be, thanks to machine learning frameworks--such as Google's TensorFlow--that ease the process of acquiring data, training models, serving predictions, and refining future results. Created by the Google Brain team, TensorFlow is an open source library for numerical computation and large-scale machine learning. TensorFlow bundles together a slew of machine learning and deep learning (aka neural networking) models and algorithms and makes them useful by way of a common metaphor. It uses Python to provide a convenient front-end API for building applications with the framework, while executing those applications in high-performance C++.
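
To make that concrete, here is a minimal, illustrative sketch (not from the article) of the Python front end in action; the toy data and hyperparameters are placeholders:

```python
import numpy as np
import tensorflow as tf

# Toy regression data: y = 3x + 2 plus a little noise
x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 2.0 + 0.05 * np.random.randn(256, 1).astype("float32")

# One Dense layer is enough to fit a line
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="adam", loss="mse")

# Training and prediction are dispatched to the high-performance C++ runtime
model.fit(x, y, epochs=200, verbose=0)
print(model.predict(np.array([[1.0]], dtype="float32")))  # approaches [[5.0]]
```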


From one brain scan, more information for medical artificial intelligence: System helps machine-learning models glean training information for diagnosing and treating brain conditions

#artificialintelligence

An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer's disease and multiple sclerosis. But collecting the training data is laborious: All anatomical structures in each scan must be separately outlined or hand-labeled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place. In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. The dataset can be used to better train machine-learning models to find anatomical structures in new scans -- the more training data, the better those predictions.
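
The trick is to treat the one labeled scan as an atlas and warp it, spatially and in appearance, to resemble the unlabeled scans, carrying the expert labels along with the warp. A simplified sketch of the spatial half of that idea, with a random smooth deformation standing in for the transforms the MIT system learns from unlabeled data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

# Warp one labeled scan to synthesize a new labeled training example.
# In the MIT system the spatial and appearance transforms are *learned*
# from the unlabeled scans; here a random smooth deformation stands in.

def random_deformation(shape, sigma=8.0, alpha=15.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # One smooth random displacement field per axis
    return [gaussian_filter(rng.normal(size=shape), sigma) * alpha
            for _ in shape]

def warp(volume, displacement, order):
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, displacement)]
    return map_coordinates(volume, coords, order=order)

atlas = np.random.rand(64, 64, 64)          # stand-in for the one labeled scan
labels = (atlas > 0.5).astype(np.int32)     # stand-in for its expert labels

disp = random_deformation(atlas.shape)
new_scan = warp(atlas, disp, order=1)       # interpolate intensities
new_labels = warp(labels, disp, order=0)    # nearest neighbor keeps labels discrete
```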


Essential tips for scaling quality AI data labeling

#artificialintelligence

Across every industry, engineers and scientists are in a race to clean and structure massive amounts of data for AI. Teams of computer vision engineers use labeled data to design and train the deep learning algorithms that self-driving cars use to recognize pedestrians, trees, street signs, and other vehicles. Data scientists are using labeled data and natural language processing (NLP) to automate legal contract review and predict patients who are at higher risk of chronic illness. The success of these systems depends on skilled humans in the loop, who label and structure the data for machine learning (ML). When data labeling is low quality, an ML model will struggle to learn.


NVIDIA Researchers Present Pixel Adaptive Convolutional Neural Networks at CVPR 2019 - NVIDIA Developer News Center

#artificialintelligence

Despite the widespread use of convolutional neural networks (CNNs), the convolution operations in standard CNNs have some limitations. To overcome them, researchers from NVIDIA and the University of Massachusetts Amherst developed a new type of convolution operation that can dynamically adapt to input images, generating filters specific to their content. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California this week. "Convolutions are the fundamental building blocks of CNNs," the researchers wrote in the paper. "The fact that their weights are spatially shared is one of the main reasons for their widespread use, but it is also a major limitation, as it makes convolutions content-agnostic." To help improve the efficiency of CNNs, the team proposed Pixel-Adaptive Convolution (PAC), a generalization of the standard convolution that mitigates this limitation.
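
Conceptually, PAC multiplies the spatially shared filter weights at each position by a content-dependent kernel computed from per-pixel guidance features. A naive, loop-based sketch of that idea, based on the paper's description rather than the authors' implementation:

```python
import numpy as np

def pac_2d(x, feats, weights, sigma=1.0):
    """x: (H, W) image; feats: (H, W, C) guidance features; weights: (k, k)
    spatially shared filter, modulated per pixel by a feature-based kernel."""
    H, W = x.shape
    k = weights.shape[0]
    r = k // 2
    xp = np.pad(x, r)
    fp = np.pad(feats, ((r, r), (r, r), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]
            fpatch = fp[i:i + k, j:j + k]
            # Gaussian kernel on feature differences: near 1 where a
            # neighbor's features resemble the center pixel's features
            diff = fpatch - feats[i, j]
            kern = np.exp(-0.5 * np.sum(diff ** 2, axis=-1) / sigma ** 2)
            out[i, j] = np.sum(kern * weights * patch)
    return out

# Example: smooth an image while respecting edges in the guidance features
img = np.random.rand(32, 32)
guide = img[..., None]              # use the intensity itself as the feature
box = np.ones((3, 3)) / 9.0         # shared box filter
smoothed = pac_2d(img, guide, box)
```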


New deepfake algorithm allows you to text-edit the words of a speaker in a video

#artificialintelligence

On the non-fingerprinting side of things, many, if not most, deep learning efforts are already aimed at the problem of how to spot fakes. Indeed, with the Generative Adversarial Network approach, two networks compete against each other: one generates fake after fake, while the other tries to pick the fakes out from real inputs. Over millions of training iterations, the discerning network gets better at picking fakes, and the better it gets, the better the fake-generating network has to become to fool it.
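
This back-and-forth is the standard GAN minimax game from Goodfellow et al.; in the usual notation, the discriminator D and the generator G optimize the same value function in opposite directions:

\min_G \max_D \, V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The better D gets at telling real samples x from generated samples G(z), the stronger the gradient signal pushing G to produce more convincing fakes.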


MIT's neural network aims to create the perfect pizza - ZDNet

#artificialintelligence

Cooking well takes patience, time, practice, and skill, so is it possible for a machine to do what professional human chefs take years to perfect? A new study in deep neural networks, titled "How to make a pizza: Learning a compositional layer-based GAN model" and recently published on arxiv.org, sets out to answer that question. The PizzaGAN project is described as an experiment in how to teach a machine to make a pizza by recognizing aspects of cooking, such as adding and subtracting ingredients or cooking the dish. The Generative Adversarial Network (GAN) deep learning model is trained to recognize these different steps and objects, and by doing so is able to view a single image of a pizza, dissect and peel apart each object or 'layer,' and recreate a step-by-step guide to cook it. "Given only weak image-level supervision, the operators are trained to generate a visual layer that needs to be added to or removed from the existing image," the research paper explains.
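
The "add" and "remove" operators the paper describes amount to predicting an image layer plus a mask and compositing it onto the current image. A toy sketch of just that compositing step; in PizzaGAN the layers and masks come from trained GAN modules, whereas here they are hand-made arrays:

```python
import numpy as np

def add_layer(image, layer_rgb, alpha):
    """image, layer_rgb: (H, W, 3) in [0, 1]; alpha: (H, W) mask in [0, 1]."""
    a = alpha[..., None]
    # Standard alpha compositing: the layer shows through where the mask is on
    return a * layer_rgb + (1.0 - a) * image

def remove_layer(image, background_rgb, alpha):
    # The inverse operator paints the background back in under the mask
    a = alpha[..., None]
    return a * background_rgb + (1.0 - a) * image

# Example: drop a square of "pepperoni" (red) onto a plain pizza
pizza = np.full((64, 64, 3), 0.8)                      # pale dough
pepperoni = np.zeros((64, 64, 3)); pepperoni[..., 0] = 0.9
mask = np.zeros((64, 64)); mask[24:40, 24:40] = 1.0
with_topping = add_layer(pizza, pepperoni, mask)
```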


Generative Adversarial Network (GAN) using Keras

#artificialintelligence

A GAN is an unsupervised deep learning algorithm in which a Generator is pitted against an adversarial network called the Discriminator. Think of the Generator as a counterfeiter printing fake currency and the Discriminator as a team of cops trying to detect the counterfeits; counterfeiter and cops are each trying to beat the other at their game. The Generator's objective is to generate data that is very similar to the training data: data generated by the Generator should be indistinguishable from the real data.
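
A minimal sketch of this Generator-versus-Discriminator setup in Keras; the layer sizes, image shape, and training details are illustrative assumptions, not the article's code:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # size of the random noise vector fed to the Generator

# Generator ("counterfeiter"): noise -> flattened 28x28 image in [-1, 1]
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),
])

# Discriminator ("cops"): flattened image -> probability it is real
discriminator = keras.Sequential([
    keras.Input(shape=(28 * 28,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model: freeze the Discriminator so only the Generator learns here.
# Keras takes `trainable` into account at compile time, so the Discriminator
# (compiled above while trainable) still learns when trained directly.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images, batch_size=64):
    # 1) Train the Discriminator on real images (label 1) and fakes (label 0)
    noise = np.random.normal(size=(batch_size, latent_dim))
    fakes = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fakes, np.zeros((batch_size, 1)))
    # 2) Train the Generator through the stacked model: label the fakes
    #    "real" so its gradients push it to fool the Discriminator
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```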


Using Deep Neural Networks to make YouTube Recommendations

#artificialintelligence

The recommendation system they designed has two stages. The candidate generation network takes events from a user's YouTube history and provides only broad personalization, via collaborative filtering: users are compared through coarse identifiers such as the videos they have watched, demographic information, and search query tokens. The ranking network operates a little differently.
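
As an illustration of that two-stage shape, here is a toy sketch: retrieve a few hundred candidates cheaply by embedding similarity, then re-score only those candidates with a finer model. The embeddings and scoring functions are stand-ins, not YouTube's models:

```python
import numpy as np

rng = np.random.default_rng(0)
num_videos, dim = 10_000, 64
video_emb = rng.normal(size=(num_videos, dim))  # learned offline in a real system
user_emb = rng.normal(size=dim)                 # derived from the user's history

def generate_candidates(user_vec, k=100):
    # Stage 1: cheap, broad retrieval over the full corpus by dot product
    scores = video_emb @ user_vec
    return np.argsort(scores)[-k:][::-1]        # ids of the top-k videos

def rank(candidate_ids, user_vec):
    # Stage 2: stand-in for the ranking network, which in practice re-scores
    # the small candidate set with many richer features
    scores = video_emb[candidate_ids] @ user_vec
    return candidate_ids[np.argsort(scores)[::-1]]

recommended = rank(generate_candidates(user_emb), user_emb)[:10]
print(recommended)
```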


Japan's Fastest Supercomputer Adopts NGC, Enabling Easy Access to Deep Learning Frameworks

#artificialintelligence

From discovering drugs, to locating black holes, to finding safer nuclear energy sources, high performance computing systems around the world have enabled breakthroughs across all scientific domains. Japan's fastest supercomputer, ABCI, powered by NVIDIA Tensor Core GPUs, enables similar breakthroughs by taking advantage of AI. The system is the world's first large-scale, open AI infrastructure serving researchers, engineers, and industrial users to advance their science. The software used to drive these advances is as critical as the servers the software runs on. However, installing an application on an HPC cluster is complex and time-consuming.


PayPal Feeds the DL Beast with Huge Vault of Fraud Data

#artificialintelligence

PayPal is no stranger to fraud. As one of the Internet's first online payment services, PayPal has been exposed to every type of wire fraud imaginable (and some beyond imagination). Sometimes the fraudsters had the upper hand, but now, thanks to deep learning (DL) models running on high performance computing (HPC) infrastructure, PayPal is leveraging its vast repository of fraud data to keep the fraudsters on the run. PayPal is one of the classic success stories of the Internet era. Founded during the dot-com heyday of 1998, the company carved out a lucrative niche--facilitating secure payments online--early in the Internet's development.