Reducing Transformer Key-Value Cache Size with Cross-Layer Attention