Revisiting Gradient Descent: A Dual-Weight Method for Improved Learning

Wang, Xi

arXiv.org Artificial Intelligence 

In neural networks, the weight vector W of a neuron plays a crucial role in transforming input features into outputs. While W represents the synaptic weights that a postsynaptic neuron assigns to its presynaptic inputs, it can also be viewed as the neuron's encoding of the target concept it aims to represent. However, defining a target concept independently of other concepts often yields an insufficient representation; effective learning requires contrasting the target with non-targets. For instance, to accurately define a "dog," it is essential not only to understand the characteristics of dogs but also to distinguish them from non-dog entities. Without this contrast, the distinction remains incomplete. Similarly, when a neuron learns, it should capture the differences between the features of the target class (hereafter termed positive examples) and those of non-target classes (negative examples).
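The contrast idea can be made concrete with a small sketch. The Python snippet below is illustrative only: the class name DualWeightNeuron, the label-routed update rule, and all hyperparameters are assumptions for exposition, not the paper's published algorithm. It implements a logistic neuron whose effective weight is the difference W_pos − W_neg, where positive examples refine the encoding of target features and negative examples refine the encoding of non-target features.

```python
# Minimal sketch of a dual-weight neuron, assuming the effective weight is
# the contrast W_pos - W_neg. Names and update rule are illustrative
# assumptions, not the method published in the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DualWeightNeuron:
    def __init__(self, n_features, lr=0.5):
        self.W_pos = np.zeros(n_features)  # encoding of target (e.g. "dog") features
        self.W_neg = np.zeros(n_features)  # encoding of non-target features
        self.b = 0.0
        self.lr = lr

    def forward(self, x):
        # The neuron's output is driven by the contrast between encodings.
        return sigmoid((self.W_pos - self.W_neg) @ x + self.b)

    def update(self, x, y):
        # Logistic-loss gradient step, routed by label so each weight
        # vector specializes: positives shape W_pos, negatives shape W_neg.
        err = self.forward(x) - y                # dL/dz for the logistic loss
        if y == 1:
            self.W_pos -= self.lr * err * x      # dz/dW_pos = +x
        else:
            self.W_neg -= self.lr * (-err) * x   # dz/dW_neg = -x
        self.b -= self.lr * err

# Toy usage: separate points above/below the line x1 + x2 = 0.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)
neuron = DualWeightNeuron(n_features=2)
for _ in range(20):
    for xi, yi in zip(X, y):
        neuron.update(xi, yi)
preds = np.array([neuron.forward(xi) for xi in X]) > 0.5
print("accuracy:", (preds == y).mean())
```

Under this parameterization the readout is equivalent to a single weight vector W = W_pos − W_neg, but the two components keep separate records of what the target is and what it is not, which is the distinction the abstract motivates.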