Pre-training Distillation for Large Language Models: A Design Space Exploration