Fernandez, Alfredo
TeLU Activation Function for Fast and Stable Deep Learning
Fernandez, Alfredo, Mali, Ankur
We propose the Hyperbolic Tangent Exponential Linear Unit (TeLU), a hidden-layer activation function for neural networks defined as TeLU(x) = x·tanh(exp(x)). TeLU's design is grounded in the core principles of key activation functions, achieving strong convergence by closely approximating the identity function in its active region while effectively mitigating the vanishing gradient problem in its saturating region. Its simple formulation enhances computational efficiency, leading to improvements in scalability and convergence speed. Unlike many modern activation functions, TeLU seamlessly combines the simplicity and effectiveness of ReLU with the smoothness and analytic properties essential for learning stability in deep neural networks. TeLU's ability to mimic the behavior and optimal hyperparameter settings of ReLU, while introducing the benefits of smoothness and curvature, makes it an ideal drop-in replacement. Its analytic nature positions TeLU as a powerful universal approximator, enhancing both robustness and generalization across a multitude of experiments. We validate these claims through rigorous theoretical analysis and extensive experiments, demonstrating TeLU's performance across challenging benchmarks, including ResNet18 on ImageNet, Dynamic-Pooling Transformers on Text8, and Recurrent Neural Networks (RNNs) on the Penn Treebank dataset. These results highlight TeLU's potential to set a new standard in activation functions, driving more efficient and stable learning in deep neural networks, thereby accelerating scientific discoveries across various fields.
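As a concrete illustration of the definition above, the following is a minimal PyTorch sketch of TeLU used as a drop-in replacement for ReLU (PyTorch is assumed here; this is an illustrative sketch, not the authors' reference implementation):

```python
import torch
import torch.nn as nn


class TeLU(nn.Module):
    """Hyperbolic Tangent Exponential Linear Unit: TeLU(x) = x * tanh(exp(x))."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # For large positive x, tanh(exp(x)) -> 1, so the output approaches the
        # identity; for negative x the unit saturates smoothly toward zero.
        # Autograd supplies the gradient of this smooth, analytic form directly.
        return x * torch.tanh(torch.exp(x))


# Hypothetical usage: swapping TeLU in for nn.ReLU() in a small MLP.
mlp = nn.Sequential(nn.Linear(128, 256), TeLU(), nn.Linear(256, 10))
```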
Stable and Robust Deep Learning By Hyperbolic Tangent Exponential Linear Unit (TeLU)
Fernandez, Alfredo, Mali, Ankur
In the rapidly evolving landscape of neural networks, the choice of activation function plays a pivotal role in model performance and stability. While the Rectified Linear Unit (ReLU) [6, 20] has long been the cornerstone of numerous deep learning architectures [25, 8, 26] due to its simplicity and effectiveness in mitigating the vanishing gradient problem [10, 11], it is not without limitations. In particular, ReLU suffers from the "dying ReLU" issue [18], in which neurons become inactive and cease to contribute to the learning process, potentially leading to suboptimal models. The Gaussian Error Linear Unit (GELU) [9] and Mish [19] activation functions have emerged as sophisticated alternatives that address some of ReLU's shortcomings. GELU, leveraging the properties of the Gaussian distribution, offers a smooth, non-linear transition in its activation, which can lead to improved learning dynamics [27, 4, 15]. Mish builds further on this concept, introducing a self-gating mechanism that enables smoother information flow.
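To make the contrast concrete, the short sketch below (assuming PyTorch and its built-in F.gelu and F.mish) compares the gradients of ReLU, GELU, and Mish at negative inputs, where the dying-ReLU problem originates:

```python
import torch
import torch.nn.functional as F

# Activations discussed above; Mish(x) = x * tanh(softplus(x)).
activations = {"ReLU": F.relu, "GELU": F.gelu, "Mish": F.mish}

x = torch.tensor([-3.0, -1.0, 0.5, 2.0], requires_grad=True)
for name, fn in activations.items():
    (grad,) = torch.autograd.grad(fn(x).sum(), x)
    print(f"{name}: {grad.tolist()}")

# ReLU's gradient is exactly zero for negative inputs, so a neuron stuck in that
# regime stops learning ("dying ReLU"); GELU and Mish retain small nonzero
# gradients there, which is the property the smoother alternatives exploit.
```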