On the Role of Attention Masks and LayerNorm in Transformers