5 Groundbreaking Papers That Are Testimony To Yann Lecun's Ingenuity

#artificialintelligence

Deep learning has benefited greatly, and continues to benefit, from the pioneering work of Geoff Hinton, Yann LeCun, and Yoshua Bengio that began in the late 1980s. LeCun's contributions, especially in developing convolutional neural networks and applying them to computer vision and other areas of artificial intelligence, form the basis of many products and services deployed across most technology companies today. Here are a few of Yann's groundbreaking research papers that have contributed greatly to this field: The ability of neural networks to generalize can be greatly enhanced by providing constraints from the task domain. As a follow-up to his widely popular work on backpropagation, in this paper Yann and his peers demonstrate how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach was successfully applied to the recognition of handwritten zip code digits provided by the US Postal Service.
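The architectural constraints described above, weight sharing via convolution and subsampling via pooling, can be sketched in a few lines of modern code. Below is a minimal PyTorch sketch of such a network; the layer sizes and the 28x28 input are illustrative assumptions, not the dimensions of the original 1989 zip code network.

```python
import torch
import torch.nn as nn

# A minimal convolutional network in the spirit of LeCun's zip code work:
# weight sharing (convolutions) and subsampling (pooling) encode the
# task-domain constraints directly in the architecture. Layer sizes here
# are illustrative, not those of the original network.
class DigitNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=5),   # shared 5x5 kernels scan the whole image
            nn.Tanh(),
            nn.AvgPool2d(2),                  # subsampling reduces sensitivity to small shifts
            nn.Conv2d(4, 12, kernel_size=5),
            nn.Tanh(),
            nn.AvgPool2d(2),
        )
        self.classifier = nn.Linear(12 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # x: (batch, 1, 28, 28) grayscale digits
        return self.classifier(x.flatten(1))

model = DigitNet()
logits = model(torch.randn(8, 1, 28, 28))     # batch of 8 dummy digit images
print(logits.shape)                           # torch.Size([8, 10])
```

Because the same small kernels are reused across the image, the network has far fewer free parameters than a fully connected one, which is exactly the generalization-through-constraints argument the paper makes.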


Deep Learning Papers Reading Roadmap

@machinelearnbot

If you are a newcomer to the deep learning area, the first question you may have is "Which paper should I start reading?" Here is a reading roadmap of deep learning papers! You will find many papers that are quite new but well worth reading. I will continue adding papers to this roadmap. Editor: what follows is a portion of the papers from this list.


AAAI 2020 A Turning Point for Deep Learning? Hinton, LeCun, and Bengio Might Have Different Approaches

#artificialintelligence

This is an updated version. The Godfathers of AI and 2018 ACM Turing Award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio shared a stage in New York on Sunday night at an event organized by the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). The trio of researchers have made deep neural networks a critical component of computing, and in individual talks and a panel discussion they shared their views on the current challenges facing deep learning and where it should be heading. Introduced in the mid-1980s, deep learning gained traction in the AI community in the early 2000s. The year 2012 saw the publication of the CVPR paper Multi-column Deep Neural Networks for Image Classification, which showed how max-pooling CNNs on GPUs could dramatically improve performance on many vision benchmarks, while a similar system introduced months later by Hinton and a University of Toronto team won the large-scale ImageNet competition by a significant margin over shallow machine learning methods.


Understanding deep learning in 5 minutes

#artificialintelligence

Most deep learning techniques are extensions or adaptations of artificial neural networks (ANNs), called deep nets. Different configurations of deep nets are suitable for different machine learning tasks: Restricted Boltzmann Machines (RBMs) (Smolensky, 1986; Hinton & Salakhutdinov, 2006) and autoencoders (Vincent, Larochelle, Bengio, & Manzagol, 2008) are the main deep learning techniques for finding patterns in unlabeled data. This includes tasks such as feature extraction, pattern recognition, and other unsupervised learning settings.
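To make the unlabeled-data idea concrete, here is a minimal PyTorch autoencoder sketch: the network is trained purely on reconstruction error, so no labels are required. All dimensions and hyperparameters are illustrative assumptions, not a prescription from the cited papers.

```python
import torch
import torch.nn as nn

# A minimal autoencoder: the encoder compresses unlabeled inputs into a
# low-dimensional code and the decoder reconstructs them, so the network
# learns features without any labels. Dimensions are illustrative.
class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                       # a batch of unlabeled inputs in [0, 1]
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)    # reconstruction error, no labels used
loss.backward()
opt.step()
```

After training, the encoder's output can serve as a learned feature representation for downstream tasks, which is the feature-extraction use case mentioned above.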