Eliminating Stability Hallucinations in LLM-Based TTS Models via Attention Guidance