KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization