
Collaborating Authors

BlockDrop to Accelerate Neural Network Training by IBM Research

#artificialintelligence

IBM Research, working with the University of Texas at Austin and the University of Maryland, has developed a technology called BlockDrop aimed at speeding up neural networks. The design goal is to accelerate convolutional neural network operations without any loss of fidelity, which could yield substantial cost savings for the ML community. This could "further enhance and expedite the application and use as well as boost the performance of neural nets", particularly in places and on cloud/edge servers with limited computing capability and power constraints. Increases in accuracy have been accompanied by increasingly complex and deep network architectures. This presents a problem for domains where fast inference is essential, particularly in delay-sensitive and real-time scenarios such as autonomous driving, robotic navigation, or user-interactive applications on mobile devices.


Coding Convolutional Neural Networks (CNN) with TensorFlow

#artificialintelligence

CNNs are among the state-of-the-art artificial neural network architectures and one of the best deep learning tools for areas such as image recognition and classification. The basic principle behind a CNN is convolution, which produces filtered feature maps stacked on top of each other. We'll be using the MNIST dataset, which is readily available in different libraries. The code is written as a generic template, so it can run on many datasets with very little modification. Every CNN is made up of multiple layers; the three main types of layers are convolutional, pooling, and fully-connected.
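A minimal sketch of such a network in TensorFlow's Keras API is shown below, assuming the standard tf.keras MNIST loader; the layer sizes and the single training epoch are illustrative choices, not the article's exact code.

```python
# Minimal sketch: convolutional -> pooling -> fully-connected layers on MNIST.
import tensorflow as tf

# Load MNIST: 28x28 grayscale digits, 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```

Because only the input shape and the final Dense layer are dataset-specific, swapping in a different image dataset requires changing little else.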


BlockDrop to Accelerate Neural Network Training by IBM Research

#artificialintelligence

IBM Research, with the help of the University of Texas at Austin and the University of Maryland, has created a technology called BlockDrop that promises to speed up convolutional neural network operations without any loss of fidelity. This could further the use of neural nets, particularly in settings with limited computing capability. Increases in accuracy have been accompanied by increasingly complex and deep network architectures. This presents a problem for domains where fast inference is essential, particularly in delay-sensitive and real-time scenarios such as autonomous driving, robotic navigation, or user-interactive applications on mobile devices. Further research shows that dropout, a regularization technique designed for fully connected layers, is less effective for convolutional layers, because activation units in these layers are spatially correlated and information can still flow through convolutional networks despite dropout.
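The article does not spell out BlockDrop's mechanism, but, as the name suggests, the general idea is to decide per input which residual blocks actually need to run. The toy sketch below (TensorFlow, not IBM's implementation) uses a hard-coded keep/skip policy as a stand-in for a learned policy network, purely to show how skipping blocks saves computation at inference time.

```python
# Toy sketch: dynamically skipping residual blocks according to a per-input policy.
import tensorflow as tf

class ResidualBlock(tf.keras.layers.Layer):
    """A plain residual block; skipping it leaves the input unchanged."""
    def __init__(self, filters):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")
        self.conv2 = tf.keras.layers.Conv2D(filters, 3, padding="same")

    def call(self, x):
        return x + self.conv2(self.conv1(x))

blocks = [ResidualBlock(16) for _ in range(4)]
policy = [1, 0, 1, 0]                 # illustrative stand-in for a learned policy network's output
x = tf.random.normal([1, 32, 32, 16])

for block, keep in zip(blocks, policy):
    if keep:                          # skipped blocks cost no computation
        x = block(x)
```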


Altitude Training: Strong Bounds for Single-Layer Dropout

Neural Information Processing Systems

Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.
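As a concrete illustration of single-layer dropout training, the sketch below trains a linear classifier on inputs artificially corrupted by dropout; the random bag-of-words-style data and all sizes are made up for illustration and are not from the paper.

```python
# Sketch: single-layer (linear) classifier trained on dropout-corrupted inputs.
import tensorflow as tf

num_features, num_classes = 1000, 2
# Sparse, bag-of-words-like binary features with random labels (illustrative only).
x = tf.cast(tf.random.uniform([512, num_features]) < 0.05, tf.float32)
y = tf.random.uniform([512], maxval=num_classes, dtype=tf.int32)

model = tf.keras.Sequential([
    # Dropout corrupts inputs during training only; it is inactive at test time,
    # mirroring the "train at altitude, race at sea level" analogy.
    tf.keras.layers.Dropout(0.5, input_shape=(num_features,)),
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # single linear layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=1, verbose=0)

# At test time the classifier sees uncorrupted inputs.
probs = model(x[:1], training=False)
```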


Generating Videos with Scene Dynamics

Neural Information Processing Systems

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
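For readers unfamiliar with spatio-temporal convolutions, here is a toy sketch of a generator that maps a noise vector to a short video clip using 3D transposed convolutions; the layer sizes are illustrative assumptions and this is not the paper's two-stream foreground/background architecture.

```python
# Toy sketch: a spatio-temporal convolutional generator producing tiny video clips.
import tensorflow as tf

latent_dim = 100
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(2 * 4 * 4 * 256, input_shape=(latent_dim,)),
    tf.keras.layers.Reshape((2, 4, 4, 256)),  # (time, height, width, channels)
    tf.keras.layers.Conv3DTranspose(128, 4, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv3DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv3DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
])

z = tf.random.normal([1, latent_dim])
video = generator(z)   # shape (1, 16, 32, 32, 3): 16 frames of 32x32 RGB
```

In a full GAN setup this generator would be trained against a spatio-temporal discriminator that judges whole clips rather than single frames.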