Shortened LLaMA: A Simple Depth Pruning for Large Language Models