Subcritical Signal Propagation at Initialization in Normalization-Free Transformers
We study signal propagation at initialization in transformers through the averaged partial Jacobian norm (APJN), a measure of gradient amplification across layers. We extend APJN analysis to transformers with bidirectional attention and permutation-symmetric input token configurations by deriving recurrence relations for activation statistics and APJNs across layers. Our theory predicts how attention modifies the asymptotic behavior of the APJN at large depth and matches APJNs measured in deep vision transformers. The criticality picture known from residual networks carries over to transformers: the pre-LayerNorm architecture exhibits power-law APJN growth, whereas transformers with LayerNorm replaced by elementwise $\tanh$-like nonlinearities have stretched-exponential APJN growth, indicating that the latter are subcritical. Applied to Dynamic Tanh (DyT) and Dynamic erf (Derf) transformers, the theory explains why these architectures can be more sensitive to initialization and optimization choices and require careful tuning for stable training.
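To make the abstract's central quantity concrete, the sketch below (not the paper's code) estimates the APJN $\mathcal{J}^{0,l} = \mathbb{E}\big[\tfrac{1}{N_l}\|\partial h^{l}/\partial h^{0}\|_F^2\big]$ empirically at initialization, once for a pre-LayerNorm transformer stack and once with LayerNorm swapped for Dynamic Tanh (DyT), i.e. $\gamma \odot \tanh(\alpha x) + \beta$. The block structure, widths, the DyT init $\alpha_0 = 0.5$, and the single-initialization Hutchinson estimate are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Dynamic Tanh: gamma * tanh(alpha * x) + beta, an elementwise
    LayerNorm replacement; alpha0 = 0.5 is an assumed default."""
    def __init__(self, dim, alpha0=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha0))
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        return self.gamma * torch.tanh(self.alpha * x) + self.beta

class Block(nn.Module):
    """Pre-norm transformer block with bidirectional self-attention;
    norm_cls is nn.LayerNorm (pre-LN) or DyT (normalization-free)."""
    def __init__(self, dim, heads, norm_cls):
        super().__init__()
        self.n1, self.n2 = norm_cls(dim), norm_cls(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.n1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.n2(x))

def apjn(blocks, x0, n_probes=8):
    """Estimate (1/N_l) * ||d h^l / d h^0||_F^2 at every depth l via
    Hutchinson probes: E_v ||v^T J||^2 = ||J||_F^2. The true APJN also
    averages over random inits; this sketch uses a single draw."""
    x0 = x0.clone().requires_grad_(True)
    h, out = x0, []
    for blk in blocks:
        h = blk(h)
        est = 0.0
        for _ in range(n_probes):
            v = torch.randn_like(h)
            (g,) = torch.autograd.grad(h, x0, grad_outputs=v,
                                       retain_graph=True)
            est += g.pow(2).sum().item()
        out.append(est / (n_probes * h.numel()))
    return out

torch.manual_seed(0)
dim, heads, depth, seq = 64, 4, 12, 16
x = torch.randn(1, seq, dim)
for norm_cls in (nn.LayerNorm, DyT):
    blocks = nn.ModuleList(Block(dim, heads, norm_cls) for _ in range(depth))
    print(norm_cls.__name__, ["%.3g" % j for j in apjn(blocks, x)])
```

Per the abstract's prediction, the LayerNorm run should show the APJN growing roughly as a power law in depth, while the DyT run should show slower, stretched-exponential growth; resolving the two regimes cleanly would require deeper stacks and averaging over many more initializations than this toy sketch uses.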
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.67)