Deep Learning
Towards Bayesian Deep Learning: A Survey
While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, it is naturally desirable to tightly integrate deep learning and Bayesian models within a principled probabilistic framework, which we call Bayesian deep learning. In this unified framework, the perception of text or images using deep learning can boost the performance of higher-level inference and, in return, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a general introduction to Bayesian deep learning and reviews its recent applications to recommender systems, topic models, and control. We also discuss the relationship and differences between Bayesian deep learning and related topics such as the Bayesian treatment of neural networks.
Salesforce Investing In AI, Deep Learning - InformationWeek
Salesforce has quietly been amassing talent in the artificial intelligence domain, most recently with the acquisition of MetaMind this week, a company working on deep learning for automated image recognition. Salesforce CEO Marc Benioff has backed the Palo Alto, California-based startup almost from the beginning, participating in an $8 million venture round in December 2014 along with Khosla Ventures. The deal follows a number of other Salesforce acquisitions, including machine learning startup PredictionIO, enterprise data science company MinHash, and a "smart" iPhone calendar app called Tempo AI that automatically added context such as contacts and documents to calendar items. Salesforce has also hired away some of LinkedIn's data science talent. Forrester Research principal analyst Mike Gualtieri told InformationWeek in an interview that Salesforce is keeping pace with consumer-focused Internet giants like Google and Facebook with these acquisitions.
Question about loss clipping on DeepMind's DQN • /r/MachineLearning
I am trying my own implementation of the DQN paper by DeepMind in TensorFlow and am running into difficulty with the clipping of the loss function. The paper states: "We also found it helpful to clip the error term from the update to be between -1 and 1. Because the absolute value loss function |x| has a derivative of -1 for all negative values of x and a derivative of 1 for all positive values of x, clipping the squared error to be between -1 and 1 corresponds to using an absolute value loss function for errors outside of the (-1, 1) interval. This form of error clipping further improved the stability of the algorithm." The agent is not learning the proper policy in this case.
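For reference, here is a minimal NumPy sketch (not the poster's TensorFlow implementation) of the interpretation the quoted passage describes: clip the TD error itself rather than the loss, which is equivalent to minimizing a Huber-style loss that is quadratic inside [-1, 1] and absolute-value (linear) outside it. The function names and the example values are illustrative assumptions, not code from the paper.

    import numpy as np

    def clipped_error_grad(td_error):
        # Gradient used in the DQN update: the TD error itself, clipped to [-1, 1].
        return np.clip(td_error, -1.0, 1.0)

    def huber_loss(td_error, delta=1.0):
        # Equivalent loss: squared error inside [-delta, delta], absolute-value
        # (linear) outside, so its gradient matches the clipped error above.
        abs_err = np.abs(td_error)
        quadratic = 0.5 * np.square(td_error)
        linear = delta * (abs_err - 0.5 * delta)
        return np.where(abs_err <= delta, quadratic, linear)

    errors = np.array([-3.0, -0.5, 0.2, 2.0])
    print(clipped_error_grad(errors))  # [-1.  -0.5  0.2  1. ]
    print(huber_loss(errors))          # [2.5   0.125 0.02  1.5]

A common source of the failure described above is clipping the loss value directly (which zeroes the gradient for large errors) instead of clipping the error term as the paper intends.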
Maluuba opens deep learning research lab in Montreal
The research lab will be led by Maluuba's CTO, Kaheer Suleman, and will be staffed by 13 deep learning research scientists. Maluuba has also partnered with reinforcement learning expert Richard Sutton, a principal investigator from the Alberta Innovates Centre for Machine Learning and an Association for the Advancement of Artificial Intelligence Fellow. "Maluuba is working with leading experts and the world's premier academic centre for deep learning to design systems that can represent knowledge and answer questions in natural language. The potential applications of this research are staggering." The company counts LG as one of its customers.
Boston Limited Unveils Cloud-Based Deep Learning Solution
As part of its exhibition at GTC 2016, the world's largest GPU conference, Boston Limited is showcasing Boston ANNA, the world's fastest deep learning training accelerator. Expert scientists in the field of machine learning have leveraged the power of the GPU to make huge strides in improving a multitude of applications. Deep learning is the fastest-growing field within this sphere, and today's advanced deep neural networks use algorithms, big data, and the computational power of GPUs to reduce time-to-solution or to improve the accuracy of results. Deep learning is used in the research community and in industry to help solve many big data problems such as computer vision, speech recognition, and natural language processing. Models can take days or even weeks to train, forcing data scientists to make compromises between accuracy and time to deployment.
NVIDIA bets big on AI with powerful new chip
NVIDIA has released a new state-of-the-art chip that pushes the limits of machine learning. The Tesla P100 GPU, which CEO Jen-Hsun Huang revealed yesterday at NVIDIA's annual GPU Technology Conference, can perform deep learning neural network tasks 12 times faster than NVIDIA's previous top-end system. The P100 was a huge commitment for NVIDIA, costing over $2 billion in research and development, and it sports a whopping 150 billion transistors on a single chip, making the P100 the world's largest chip, NVIDIA claims. In addition to machine learning, the P100 will work for all sorts of high-performance computing tasks -- NVIDIA just wants you to know it's really good at machine learning. To top off the P100's introduction, NVIDIA has packed eight of them into a crazy-powerful $129,000 supercomputer called the DGX-1, which was also announced yesterday.
Nvidia's Huang Expounds A.I. Vision: 'We're No Longer a Co-Processor!' Is It Priced In?
Shares of graphics chip maker Nvidia (NVDA) are down 6 cents at $35.69, following yesterday's annual meeting with analysts. A webcast replay of the presentations by CEO Jen-Hsun Huang and other executives, and of the Q&A, can be viewed from the company's investor relations page. Huang made the pitch that with the new frontiers of machine learning and artificial intelligence, Nvidia is "no longer a co-processor," meaning a handmaid to the PC microprocessor. "There is no workload we run," said Huang, meaning no single fixed workload such as a video game. Instead, he said, with the company's programming technology, "CUDA," "we run an application that a developer writes on top of it."
Insilico Medicine to present deep learned biomarkers at the Deep Learning in Healthcare Summit
Baltimore, MD - Alex Zhavoronkov, PhD, CEO of Insilico Medicine, will present a range of deep learned biomarkers of ageing and deep learned predictors of biological age at the RE-WORK Deep Learning in Healthcare Summit in London, 7-8th of April. The first such predictor is already available online at http://www.Aging.AI, trained on hundreds of thousands of human biochemistry and cell count samples linked to chronological age, gender and health status. Transcriptomic and signalomic ageing markers, predictors of chronological and biological age, and cross-species comparisons will be discussed. "RE-WORK summits are clearly outperforming most industry conferences in agility, openness, diversity and focus on applications of deep learning in multiple areas, and we are happy to be invited to present at their Deep Learning in Healthcare Summit in London. Artificial intelligence will transform biomarker development and drug discovery much sooner than most pharmaceutical companies and regulators expect, and we are happy to be at the forefront of this emerging trend," said Alex Zhavoronkov, PhD, CEO of Insilico Medicine, Inc.
What Is Local Response Normalization In Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have been doing wonders in the field of image recognition in recent times. A CNN is a type of deep neural network whose layers are connected through spatially local patterns, in line with how the human visual cortex processes visual input. Researchers have been working on better architectures over the last few years. In this blog post, we will discuss local response normalization (LRN), a layer that has been used consistently across many famous architectures.
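As a rough illustration of what such a layer computes, here is a minimal NumPy sketch of channel-wise local response normalization in the AlexNet style: each activation is divided by a term built from the squared activations of its neighbouring channels at the same spatial position. The window size n and the constants k, alpha and beta below are the AlexNet defaults, assumed here only for the example; this is not the exact implementation the blog post goes on to present.

    import numpy as np

    def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
        # a: activations with shape (channels, height, width).
        # For channel i, normalize by the sum of squares over the n channels
        # centered on i (clipped at the channel boundaries).
        channels = a.shape[0]
        half = n // 2
        b = np.empty_like(a)
        for i in range(channels):
            lo, hi = max(0, i - half), min(channels, i + half + 1)
            scale = k + alpha * np.sum(a[lo:hi] ** 2, axis=0)
            b[i] = a[i] / scale ** beta
        return b

    x = np.random.randn(8, 4, 4).astype(np.float32)
    y = local_response_norm(x)
    print(y.shape)  # (8, 4, 4)

The effect is a form of lateral inhibition: a channel whose neighbours respond strongly at the same location is damped relative to one that responds alone.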
Nvidia goes all in on AI
The idea of using GPUs for more than just fun and games is nothing new. It started with niche high-performance computing applications such as seismic data processing for oil and gas, fluid dynamics simulations and options pricing. But now Nvidia thinks it has found its killer app in the form of deep learning. "I think we are going to realize looking back that one of the biggest things that ever happened is AI," CEO Jen-Hsun Huang said in his opening keynote at this year's GPU Technology Conference. "We think this is a new computing model, a fundamentally different approach to developing software."