Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers