
 Deep Learning


Semiconductor Engineering: Convolutional Neural Networks Power Ahead

#artificialintelligence

While the term may not be immediately recognizable, convolutional neural networks (CNNs) are already part of our daily lives, and they are expected to become even more significant in the near future. Convolutional neural networks are a form of machine learning modeled on the way the brain's visual cortex distinguishes one object from another. That helps explain why their most common use today is image recognition, which is where this market is gaining real traction. But the technology also has many potential uses far beyond image recognition as power consumption is reduced and performance is improved. "The work that happens at a place like Facebook or Yahoo or Google to try to find you in all those pictures people are uploading, that's largely done with neural networks already," observed Drew Wingard, CTO of Sonics, noting that driver assistance uses similar technology.
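As a rough illustration of the kind of network the article describes, the sketch below builds a minimal convolutional image classifier with the Keras API. The input size, layer widths, and ten-class output are assumptions for illustration, not details from the article.

```python
# Minimal convolutional image classifier, sketched with the Keras API.
# The 32x32 RGB input and 10 output classes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),          # small RGB image
    layers.Conv2D(32, 3, activation="relu"),  # learn local visual features
    layers.MaxPooling2D(),                    # downsample feature maps
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # one probability per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```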



Do deep neural networks have more local minimums? • /r/MachineLearning

@machinelearnbot

I have heard that training deep networks can be difficult due to local minima. Suppose you are training two neural networks on the same data, where one of the networks is deeper (has more hidden layers) than the other. Will the deeper network's loss surface contain more local minima, or is it impossible to say when considering only how deep the network is?
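One crude way to probe the question empirically is to train a shallow and a deep network on the same data from several random initializations and compare the spread of the final losses. The synthetic task, layer sizes, and training settings below are assumptions for illustration; this is a sketch, not a rigorous analysis of the loss surface.

```python
# Crude empirical probe: train shallow vs. deep MLPs on the same synthetic
# data from several random restarts and compare the spread of final losses.
# The synthetic task and all sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = np.sin(X).sum(axis=1, keepdims=True).astype("float32")  # nonlinear target

def train_mlp(hidden_layers, seed):
    tf.random.set_seed(seed)
    net = models.Sequential([layers.Input(shape=(20,))] +
                            [layers.Dense(32, activation="relu")
                             for _ in range(hidden_layers)] +
                            [layers.Dense(1)])
    net.compile(optimizer="adam", loss="mse")
    history = net.fit(X, y, epochs=50, batch_size=64, verbose=0)
    return history.history["loss"][-1]

for depth in (1, 5):  # shallow vs. deep
    losses = [train_mlp(depth, seed) for seed in range(5)]
    print(f"{depth} hidden layer(s): final losses {np.round(losses, 4)}")
```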


Share Your Science: Leveraging Deep Learning for Personalized Drug Treatment Recommendations

#artificialintelligence

David Ledbetter, a data scientist at Children's Hospital Los Angeles, shares how his team is using TITAN X GPUs and deep learning to help provide better drug-treatment recommendations for children in the hospital's pediatric intensive care unit. To train their models, 13,000 patient snapshots were created from ten years of electronic health records at the hospital to capture the interactions between a patient's vital state (heart rate, blood pressure) and the treatments they were given. By learning the most important relationships in the data, the team can then generate probability-of-survival predictions for patients going forward, as well as physiology predictions used to simulate augmented treatments. David presented his research poster "Dr. …". Watch more scientists and researchers share how accelerated computing is benefiting their work at http://nvda.ly/X7WpH
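A hedged sketch of the kind of model such a pipeline might use is shown below: a small network that maps a vector of vital-sign measurements from a patient snapshot to a probability of survival. The feature count, layer sizes, and variable names are hypothetical; the article does not describe the team's actual architecture.

```python
# Hypothetical sketch: map a snapshot of vital-sign features (e.g. heart rate,
# blood pressure) to a probability of survival. The feature count and layer
# sizes are illustrative; the article does not specify the team's architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_VITALS = 12  # assumed number of vital-sign features per patient snapshot

model = models.Sequential([
    layers.Input(shape=(NUM_VITALS,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of survival
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(snapshot_features, survival_labels, epochs=20)  # hypothetical data
```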



ICLR 2016: Main

#artificialintelligence

The problem of building an autonomous robot has traditionally been viewed as one of integration: connecting together modular components, each one designed to handle some portion of the perception and decision making process. For example, a vision system might be connected to a planner that might in turn provide commands to a low-level controller that drives the robot's motors. In this talk, I will discuss how ideas from deep learning can allow us to build robotic control mechanisms that combine both perception and control into a single system. This system can then be trained end-to-end on the task at hand. I will show how this end-to-end approach actually simplifies the perception and control problems, by allowing the perception and control mechanisms to adapt to one another and to the task.
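To make the end-to-end idea concrete, here is a schematic sketch of a single network that maps raw camera images directly to motor commands. The image size, number of joints, and the supervised regression loss are assumptions for illustration; this is not the specific training method presented in the talk.

```python
# Schematic end-to-end perception-and-control network: raw camera pixels in,
# continuous motor commands out. Image size, number of joints, and the
# supervised regression loss are illustrative assumptions, not the talk's method.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_JOINTS = 7  # assumed number of actuated joints

policy = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                     # raw camera image
    layers.Conv2D(16, 5, strides=2, activation="relu"),  # perception layers
    layers.Conv2D(32, 5, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                # shared representation
    layers.Dense(NUM_JOINTS),                            # one command per joint (control)
])

policy.compile(optimizer="adam", loss="mse")
# policy.fit(camera_images, demonstrated_torques)  # hypothetical training data
```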


Churn analysis using deep convolutional neural networks and autoencoders

arXiv.org Machine Learning

Customer temporal behavioral data was represented as images in order to perform churn prediction by leveraging deep learning architectures prominent in image classification. Supervised learning was performed on labeled data of over 6 million customers using deep convolutional neural networks, which achieved an AUC of 0.743 on the test dataset using no more than 12 temporal features for each customer. Unsupervised learning was conducted using autoencoders to better understand the reasons for customer churn. Images that maximally activate the hidden units of an autoencoder trained with churned customers reveal ample opportunities for action to be taken to prevent churn among strong-data, no-voice users.
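A hedged sketch of the paper's general setup is shown below: each customer's 12 temporal features over T time steps are arranged as a single-channel "image" and fed to a small convolutional churn classifier. The number of time steps, layer sizes, and training details are assumptions, not the paper's exact configuration.

```python
# Sketch of the paper's general idea: arrange each customer's 12 temporal
# features over T time steps as a single-channel "image" and classify churn
# with a small CNN. T, layer sizes, and training details are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

T = 30             # assumed number of time steps per customer
NUM_FEATURES = 12  # temporal features per customer, as stated in the abstract

model = models.Sequential([
    layers.Input(shape=(T, NUM_FEATURES, 1)),   # one "image" per customer
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),      # churn probability
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])  # AUC, as reported
# model.fit(customer_images, churn_labels)  # hypothetical labeled data
```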


First Contact With TensorFlow – Prof. Jordi Torres, UPC & BSC

#artificialintelligence

In TensorFlow, the model parameters are maintained in memory as variables during the training process. When a variable is created, a tensor passed as an argument to its constructor serves as the initial value, which can be a constant or a random value.
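The snippet below illustrates that point; it is written against the current TensorFlow 2 API (the book targets an earlier TensorFlow release, but the idea is the same). A variable is created from an initial-value tensor, which here is either a constant or a random tensor.

```python
# Variables hold model parameters in memory during training. Each variable is
# created from an initial-value tensor: a constant or a random tensor.
# (Written against the TensorFlow 2 API; the book targets an earlier release.)
import tensorflow as tf

bias = tf.Variable(tf.zeros([10]), name="bias")                  # constant initial value
weights = tf.Variable(tf.random.normal([784, 10], stddev=0.1),
                      name="weights")                            # random initial value

print(bias.shape, weights.shape)
```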



Artificial intelligence finds cancer cells more efficiently

#artificialintelligence

The "photonic time stretch" was invented by Professor Barham Jalali, who holds a patent for this technology, and its use in microscopes is just one of many possible applications. It works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly – in nanoseconds, or billionths of a second – that the images would be too weak to be detected and too fast to be digitised by normal instrumentation. The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitised at a rate of 36 million images per second. It then uses deep learning to distinguish the cancer cells from healthy white blood cells.