When Thinking LLMs Lie: Unveiling Strategic Deception in the Representations of Reasoning Models