Causal Explainable Guardrails for Large Language Models