Clear Minds Think Alike: What Makes LLM Fine-tuning Robust? A Study of Token Perplexity