Achieving Binary Weight and Activation for LLMs Using Post-Training Quantization