ULU: A Unified Activation Function
arXiv.org Artificial Intelligence
We propose \textbf{ULU}, a novel non-monotonic, piecewise activation function defined as $\mathrm{ULU}(x)=\begin{cases} f(x;\alpha_1), & x<0 \\ f(x;\alpha_2), & x\ge 0 \end{cases}$, where $f(x;\alpha)=0.5\,x\,(\tanh(\alpha x)+1)$ with $\alpha>0$. ULU treats positive and negative inputs differently. Extensive experiments demonstrate that ULU significantly outperforms ReLU and Mish on image classification and object detection tasks. Its variant, Adaptive ULU (\textbf{AULU}), is expressed as $\mathrm{AULU}(x)=\begin{cases} f(x;\beta_1^2), & x<0 \\ f(x;\beta_2^2), & x\ge 0 \end{cases}$, where $\beta_1$ and $\beta_2$ are learnable parameters, enabling it to adapt its response separately for positive and negative inputs. Additionally, we introduce the LIB (Like Inductive Bias) metric, derived from AULU, to quantitatively measure the inductive bias of a model.
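The piecewise definition above can be sketched directly. The following is a minimal NumPy illustration of ULU as defined in the abstract; the default values for `alpha1` and `alpha2` are placeholders for illustration, not values prescribed by the paper.

```python
import numpy as np

def ulu(x, alpha1=1.0, alpha2=1.0):
    """ULU activation: f(x; a) = 0.5 * x * (tanh(a * x) + 1),
    applied with alpha1 for x < 0 and alpha2 for x >= 0.
    The alpha defaults here are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    # Select the branch-specific slope parameter elementwise.
    alpha = np.where(x < 0, alpha1, alpha2)
    return 0.5 * x * (np.tanh(alpha * x) + 1.0)
```

Like Mish and GELU, the function is smooth and approaches the identity for large positive inputs and zero for large negative inputs; the two branch parameters control how sharply each side saturates.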
Aug-8-2025