LLM-Neo: Parameter Efficient Knowledge Distillation for Large Language Models
