Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers