Eliminating Stability Hallucinations in LLM-Based TTS Models via Attention Guidance