Limits of KV Cache Compression for Tensor Attention-Based Autoregressive Transformers