LLM-Neo: Parameter-Efficient Knowledge Distillation for Large Language Models