Deep learning has benefited, and continues to benefit, from the pioneering work of Geoffrey Hinton, Yann LeCun, and Yoshua Bengio in the late 1980s. LeCun's contributions, especially the development of convolutional neural networks and their applications in computer vision and other areas of artificial intelligence, form the basis of many products and services deployed across most technology companies today. Here are a few of LeCun's groundbreaking research papers that have contributed greatly to this field: The ability of neural networks to generalize can be greatly enhanced by providing constraints from the task domain. As a follow-up to his widely popular work on backpropagation, in this paper LeCun and his peers demonstrate how such constraints can be integrated into a backpropagation network through the architecture of the network. The approach was successfully applied to the recognition of handwritten zip code digits provided by the US Postal Service.
If you are a newcomer to the deep learning area, the first question you may have is "Which paper should I start reading from?" Here is a reading roadmap of deep learning papers! You will find many papers that are quite new but really worth reading. I will continue adding papers to this roadmap. Editor: What follows is a portion of the papers from this list.
Most deep learning techniques are extensions or adaptations of ANNs, called deep nets. Different configurations of deep nets are suitable for different machine learning tasks: Restricted Boltzmann Machines (RBMs) (Smolensky, 1986; Hinton & Salakhutdinov, 2006) and autoencoders (Vincent, Larochelle, Bengio, & Manzagol, 2008) are the main deep learning techniques for finding patterns in unlabeled data. This includes tasks such as feature extraction, pattern recognition, and other unsupervised learning settings.
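To make the unsupervised-learning idea concrete, here is a minimal sketch of an autoencoder in plain NumPy: it compresses unlabeled 4-dimensional inputs into a 2-dimensional code and learns to reconstruct them, with no labels involved. All names, sizes, and hyperparameters here are illustrative choices, not taken from the papers cited above.

```python
import numpy as np

# Illustrative sketch: a tiny autoencoder trained on unlabeled data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # 200 unlabeled 4-D samples (synthetic)

n_in, n_hidden = 4, 2          # compress 4 dims down to a 2-D code
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))  # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_in))  # decoder weights
b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # encoder: low-dimensional code
    X_hat = H @ W2 + b2        # decoder: reconstruction of the input
    return H, X_hat

lr = 0.1
for step in range(3000):
    H, X_hat = forward(X)
    err = X_hat - X            # reconstruction error (the only "signal")
    # Backpropagate the mean-squared reconstruction loss by hand.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gH = (err @ W2.T) * (1 - H**2)   # tanh derivative
    gW1 = X.T @ gH / len(X)
    gb1 = gH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

H, X_hat = forward(X)
print("reconstruction MSE:", np.mean((X_hat - X) ** 2))
```

The learned hidden activations `H` are the extracted features: training never sees a label, only the requirement that the 2-D code retain enough information to rebuild the input.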
In the late '90s, Tomi Poutanen, a precocious computer whiz from Finland, hoped to do his dissertation on neural networks, a scientific method aimed at teaching computers to act and think like humans. For a student at the University of Toronto, it was a logical choice: Geoffrey Hinton, the godfather of neural network research, taught and ran a research lab there. But instead of encouraging Poutanen, who went on to work at Yahoo and recently co-founded media startup Milq, one of his professors sent a stern warning about taking the academic path known as deep learning. "Smart scientists," his professor cautioned, "go there to see their careers end."
The 2018 Turing Award, known as the "Nobel Prize of computing," has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence. Yoshua Bengio, Geoffrey Hinton, and Yann LeCun -- sometimes called the 'godfathers of AI' -- have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses. In fact, you probably interacted with the descendants of Bengio, Hinton, and LeCun's algorithms today -- whether that was the facial recognition system that unlocked your phone, or the AI language model that suggested what to write in your last email.