We revisit fuzzy neural networks with the cornerstone notion of generalized Hamming distance, which provides a novel and theoretically justified framework for re-interpreting many useful neural network techniques in terms of fuzzy logic. In particular, we conjecture, and empirically illustrate, that the celebrated batch normalization (BN) technique actually adapts the "normalized" bias so that it approximates the rightful bias induced by the generalized Hamming distance. Once the due bias is enforced analytically, neither the optimization of bias terms nor sophisticated batch normalization is needed. Also, in light of the generalized Hamming distance, the popular rectified linear unit (ReLU) can be treated as setting a minimal Hamming-distance threshold between network inputs and weights. This thresholding scheme can be improved by introducing double-thresholding on both the positive and negative extremes of neuron outputs.
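The two ideas in the abstract can be sketched in a few lines of numpy. The element-wise generalized Hamming distance g(a, b) = a + b - 2ab, summed over a vector, makes a linear neuron's output an affine function of the distance once the bias is fixed analytically; `double_threshold` is one plausible reading of double-thresholding (zeroing mid-range responses, keeping both extremes). The function names and the exact thresholding rule are assumptions for illustration, not the paper's code.

```python
import numpy as np

def ghd(x, w):
    # Element-wise generalized Hamming distance g(a, b) = a + b - 2ab,
    # summed over the vector (a fuzzy-XOR-style dissimilarity).
    return np.sum(x + w - 2 * x * w)

def ghd_neuron(x, w):
    # Linear neuron with the bias fixed analytically (not learned):
    # w.x - 0.5*(sum x + sum w) equals -0.5 * ghd(x, w), so the output
    # is exactly an affine function of the generalized Hamming distance.
    b = -0.5 * (np.sum(x) + np.sum(w))
    return np.dot(w, x) + b

def double_threshold(z, t=0.5):
    # One plausible double-thresholding: keep responses whose magnitude
    # exceeds t on either extreme, suppress the mid-range (an assumption;
    # the paper's exact scheme may differ).
    return np.where(np.abs(z) > t, z, 0.0)
```

With the analytic bias in place, the neuron needs no trainable bias term, which is the abstract's argument for why BN becomes unnecessary.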

Joseph, K J; R, Vamshi Teja; Singh, Krishnakant; Balasubramanian, Vineeth N

Mini-batch gradient descent based methods are the de facto algorithms for training neural network architectures today. We introduce a mini-batch selection strategy based on submodular function maximization. Our novel submodular formulation captures both the informativeness of each sample and the diversity of the whole subset. We design an efficient greedy algorithm that gives high-quality solutions to this NP-hard combinatorial optimization problem. Our extensive experiments on standard datasets show that deep models trained using the proposed batch selection strategy generalize better than Stochastic Gradient Descent as well as a popular baseline sampling strategy, across different learning rates, batch sizes, and distance metrics.
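A greedy routine for this kind of objective can be sketched as below: a modular "informativeness" score per sample plus a facility-location diversity term, which together are monotone submodular, so the greedy algorithm enjoys the usual (1 - 1/e) guarantee. The objective, the function name, and the similarity choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_batch(features, scores, k, lam=1.0):
    # Greedily maximize f(S) = sum_{j in S} scores[j]
    #                        + lam * sum_i max_{j in S} sim(i, j),
    # a monotone submodular objective (informativeness + facility-location
    # diversity).  A sketch under assumed definitions.
    n = len(scores)
    sims = features @ features.T      # pairwise similarity matrix
    coverage = np.zeros(n)            # coverage[i] = max sim of i to S
    selected = []
    for _ in range(k):
        best, best_gain = -1, -np.inf
        for j in range(n):
            if j in selected:
                continue
            # Marginal gain of adding j: its own score plus the increase
            # in total coverage of the dataset.
            gain = scores[j] + lam * np.sum(
                np.maximum(sims[j], coverage) - coverage)
            if gain > best_gain:
                best, best_gain = j, gain
        selected.append(best)
        coverage = np.maximum(coverage, sims[best])
    return selected
```

Each outer iteration picks the sample with the largest marginal gain, so informative samples are preferred but near-duplicates of already-selected samples contribute little diversity gain and are skipped.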

This is a gradient descent algorithm for classification, implemented from scratch using the numpy library. It is good practice to shuffle the data first, e.g. with numpy.random.shuffle(). The mini-batch size is the amount of input data flowing through the network at a time when the error is computed as a whole. The learning rate alpha decides how strongly the weights and biases are updated during backpropagation. The number of epochs decides how many times the whole dataset is used to train the network. Set the mini-batch size to roughly one tenth of the available data, and adjust it manually after each training run to find its optimum value. Alpha should be chosen so that learning is neither very slow nor takes jumps so large that the network diverges from the local minimum. The number of epochs is chosen so that the network does not overfit to noise. In an ANN, the output depends on every neuron the signal passes through. For the output layer there is a label, so its expected value can be computed directly; for all other layers no such single target is available, which makes finding optimal values there a little harder.
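The workflow described above (shuffle, mini-batch size defaulting to a tenth of the data, learning rate alpha, epoch count) can be condensed into a minimal numpy training loop. This is a sketch using logistic regression as the classifier, not the repository's actual code; the function name and defaults are assumptions.

```python
import numpy as np

def train(X, y, batch_size=None, alpha=0.1, epochs=100, seed=0):
    # Minimal mini-batch gradient descent for logistic regression.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if batch_size is None:
        batch_size = max(1, n // 10)    # rule of thumb: 1/10th of the data
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        idx = rng.permutation(n)        # shuffle before every epoch
        for start in range(0, n, batch_size):
            bi = idx[start:start + batch_size]
            z = X[bi] @ w + b
            p = 1.0 / (1.0 + np.exp(-z))       # sigmoid prediction
            err = p - y[bi]                    # gradient of logistic loss
            w -= alpha * X[bi].T @ err / len(bi)
            b -= alpha * err.mean()
    return w, b
```

A too-large alpha makes `w` oscillate and diverge, a too-small one makes convergence crawl, and too many epochs on noisy data overfit, which is exactly the tuning advice given above.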

Like the introduction of the ReLU activation unit, batch normalization (BN for short) has changed the learning landscape considerably. Despite some reports that it does not always improve learning, it is still a very powerful tool that has gained wide acceptance recently. BN has, of course, already been used for autoencoders (AE) and their relatives, but most of the literature focuses on supervised learning. We would therefore like to summarize our results for the domain of sparse textual input data, starting with a warm-up that is soon followed by more details. Because BN is applied before the (non-linear) activation, we will introduce some notation to illustrate the procedure.
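The "BN before the activation" ordering can be made concrete with a small numpy sketch: normalize the linear pre-activation over the mini-batch, then apply the learned scale/shift and the non-linearity. The function name and the choice of ReLU are generic assumptions, not the post's notation.

```python
import numpy as np

def bn_pre_activation(X, W, gamma, beta, eps=1e-5):
    # h = relu(BN(X @ W)): batch norm is applied to the linear
    # pre-activation, before the non-linearity.
    Z = X @ W                                   # linear pre-activation
    mu = Z.mean(axis=0)                         # per-feature batch mean
    var = Z.var(axis=0)                         # per-feature batch variance
    Z_hat = (Z - mu) / np.sqrt(var + eps)       # normalize over the batch
    return np.maximum(gamma * Z_hat + beta, 0)  # scale, shift, then ReLU
```

With sparse inputs, note that `Z` is usually dense even when `X` is sparse, so the batch statistics are computed over the dense pre-activations.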

Hoffer, Elad; Ben-Nun, Tal; Hubara, Itay; Giladi, Niv; Hoefler, Torsten; Soudry, Daniel

Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of deep neural networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.
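The core operation of batch augmentation is simple to state in code: each sample appears M times in the enlarged batch, and each copy is transformed independently. The sketch below uses additive Gaussian noise as a stand-in augmentation; the function names and the transform are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def batch_augment(batch, augment, m, rng):
    # Replicate every sample m times, applying the stochastic transform
    # `augment` independently to each copy, so the enlarged batch holds
    # m differently-augmented views of each original sample.
    out = [augment(x, rng) for x in batch for _ in range(m)]
    return np.stack(out)

def noise_aug(x, rng):
    # Illustrative stochastic "augmentation": additive Gaussian noise.
    return x + 0.1 * rng.standard_normal(x.shape)
```

Averaging the gradient over m correlated-but-distinct views of each sample is what reduces gradient variance, which is the mechanism the abstract credits for the improved convergence.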