Is LLM Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection