Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. Summary: This paper proposes a model for solving discriminative tasks with video inputs. The model consists of two convolutional nets. The input to one net is an appearance frame. The input to the second net is a stack of densely computed optical flow features. Each pathway is trained separately to classify its input.
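The two-pathway design summarized above amounts to a late-fusion rule: each net produces class scores for its own input (appearance frame or stacked optical flow), and the two predictions are combined. A minimal NumPy sketch, assuming simple averaging of the two softmax outputs (the fusion rule and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_stream_predict(appearance_logits, flow_logits):
    """Late fusion of two independently trained pathways:
    average the class probabilities from the appearance net and
    the optical-flow net, then take the arg-max class."""
    p = 0.5 * (softmax(appearance_logits) + softmax(flow_logits))
    return int(np.argmax(p))
```

Averaging probabilities (rather than logits) keeps each separately trained pathway's confidence scale intact; other fusion schemes, such as training an SVM on concatenated features, are equally plausible.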


Reviews: Introspective Classification with Convolutional Nets

Neural Information Processing Systems

The paper proposes a technique to improve the test accuracy of a discriminative model, by synthesizing additional negative input examples during the training process of the model. The negative example generation process has a Bayesian motivation, and is realized by "optimizing" for images (starting from random Gaussian noise) to maximize the probability of a given class label, a la DeepDream or Neural Artistic Style. These generated examples are added to the training set, and training is halted based on performance on a validation set. Experiments demonstrate that this procedure yields (very modest) improvements in test accuracy, and additionally provides some robustness against adversarial examples. The core idea is quite elegant, with an intuitive picture of using the "hard" negatives generated by the network to tighten the decision boundaries around the positive examples.
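The synthesis step the review describes, gradient ascent on an input initialized from Gaussian noise so as to maximize the probability of a chosen class, can be sketched with a toy linear softmax classifier standing in for the CNN. All names and the linear model here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def synthesize_pseudo_negative(W, b, target_class, dim=8, steps=100, lr=0.5, seed=0):
    """Start from Gaussian noise and gradient-ascend the input x to
    maximize the softmax probability of `target_class` under a toy
    linear classifier with logits = W @ x + b (DeepDream-style)."""
    x = np.random.default_rng(seed).normal(size=dim)
    for _ in range(steps):
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # d/dx log p[target] = W[target] - sum_j p[j] * W[j]
        x += lr * (W[target_class] - p @ W)
    return x
```

In the actual method the same ascent is run through a CNN by backpropagating to the pixels; the resulting images are then labeled as negatives and added to the training set.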


Convolutional Neural Networks: The Biologically-Inspired Model

#artificialintelligence

CNNs were popularized mostly thanks to the effort of Yann LeCun, now the Director of AI Research at Facebook. In the early 1990s, LeCun worked at Bell Labs, one of the most prestigious research labs in the world at that time, and built a check-recognition system to read handwritten digits. There's a very cool video from 1993 in which LeCun shows how the system works. This system was actually an entire pipeline for doing end-to-end image recognition. The resulting paper, which he co-authored with Leon Bottou, Patrick Haffner, and Yoshua Bengio in 1998, introduced convolutional nets as well as the full end-to-end system they built.


Image GPT

#artificialintelligence

We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting. Unsupervised and self-supervised learning, or learning without human-labeled data, is a longstanding challenge of machine learning. Recently, it has seen incredible success in language, as transformer models like BERT, GPT-2, RoBERTa, T5, and other variants have achieved top performance on a wide array of language tasks. However, the same broad class of models has not been successful in producing strong features for image classification.
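"Trained on pixel sequences" amounts to flattening each image into a one-dimensional sequence and predicting each pixel from all the ones before it. A minimal sketch of that data preparation (function names are illustrative; the real Image GPT pipeline additionally reduces resolution and color depth before flattening):

```python
import numpy as np

def image_to_sequence(img):
    """Flatten an image of shape (H, W) into a 1-D pixel sequence in
    raster order -- the form an autoregressive transformer consumes."""
    return img.reshape(-1)

def next_pixel_targets(seq):
    """Autoregressive training pairs: inputs are pixels 0..n-2,
    targets are pixels 1..n-1, so each pixel is predicted from
    everything that precedes it."""
    return seq[:-1], seq[1:]
```

Sampling then works left-to-right, top-to-bottom: given a partial sequence (e.g., the top half of an image), the model repeatedly predicts the next pixel, which is how the coherent image completions mentioned above are produced.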


Introspective Classification with Convolutional Nets

Jin, Long, Lazarow, Justin, Tu, Zhuowen

Neural Information Processing Systems

We propose introspective convolutional networks (ICN), which emphasize the importance of empowering convolutional neural networks with generative capabilities. We employ a reclassification-by-synthesis algorithm to perform training, using a formulation stemming from Bayes' theorem. Our ICN iteratively: (1) synthesizes pseudo-negative samples; and (2) enhances itself by improving the classification. The single CNN classifier learned is at the same time generative --- able to directly synthesize new samples within its own discriminative model. We conduct experiments on benchmark datasets including MNIST, CIFAR-10, and SVHN using state-of-the-art CNN architectures, and observe improved classification results.
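The iterative loop in the abstract, synthesize pseudo-negatives and then retrain the classifier against them, can be sketched with a toy logistic-regression model in place of the CNN. This is a minimal sketch under that assumption; every function name here is illustrative, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, steps=200, lr=0.5):
    """Tiny logistic regression standing in for the CNN classifier."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def synthesize_pseudo_negatives(w, b, n, dim, steps=20, lr=0.2, seed=0):
    """Step (1): ascend random noise toward the positive side of the
    current decision boundary; the results become 'hard' negatives."""
    Z = np.random.default_rng(seed).normal(size=(n, dim))
    for _ in range(steps):
        Z += lr * (1.0 - sigmoid(Z @ w + b))[:, None] * w
    return Z

def icn_loop(X_pos, rounds=2, n_pseudo=20, seed=0):
    """Alternate synthesis (1) and reclassification (2): each round's
    pseudo-negatives are added to the negative pool before retraining."""
    dim = X_pos.shape[1]
    X_neg = np.random.default_rng(seed).normal(size=(n_pseudo, dim))
    for r in range(rounds):
        X = np.vstack([X_pos, X_neg])
        y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
        w, b = train_logistic(X, y)
        X_neg = np.vstack(
            [X_neg, synthesize_pseudo_negatives(w, b, n_pseudo, dim, seed=seed + r + 1)]
        )
    return w, b
```

Because the pseudo-negatives sit near the current decision boundary, each retraining round tightens the boundary around the positive class, which is the intuition behind the classification gains reported above.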


Can AI put humans back in the loop? ZDNet

#artificialintelligence

Is it possible to make artificial intelligence more trustworthy by inserting a human being into the decision process of machine learning? It may be, but you don't get something for nothing. That human being better be an individual who knows a lot about what the neural network is trying to figure out. And that presents a conundrum, given that one of the main promises of AI is precisely to find out things humans don't know. It's a conundrum that is sidestepped in a new bit of AI work by scientists at the Technische Universität Darmstadt in Germany.


Reaching New Heights with Artificial Neural Networks

Communications of the ACM

Once treated by the field with skepticism (if not outright derision), the artificial neural networks that 2018 ACM A.M. Turing Award recipients Geoffrey Hinton, Yann LeCun, and Yoshua Bengio spent their careers developing are today an integral component of everything from search to content filtering. Here, the three researchers share what they find exciting, and which challenges remain. There's so much more noise now about artificial intelligence than there was when you began your careers--some of it well-informed, some not. What do you wish people would stop asking you? GEOFFREY HINTON: "Is this just a bubble?"


Convolutional Neural Networks: The Biologically-Inspired Model Codementor

#artificialintelligence

Since then, this competition has become the benchmark arena where state-of-the-art computer vision models are introduced. In particular, there have been many competing models using deep Convolutional Neural Nets as their backbone architecture. The most popular ones that achieved excellent results in the ImageNet competition include: ZFNet (2013), GoogLeNet (2014), VGGNet (2014), ResNet (2015), DenseNet (2016), etc. These architectures were getting deeper and deeper year by year.