Accelerating LLM Inference via Dynamic KV Cache Placement in Heterogeneous Memory Systems