ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models