NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
Neural Information Processing Systems
Large Language Model (LLM) inference on Central Processing Units (CPUs) is challenging due to the vast quantities of Multiply-Add (MAD) matrix operations in the attention computations.
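To make the cost concrete, here is a minimal sketch (not from the paper; sequence length and head dimension are hypothetical) of why standard attention is MAD-heavy: both the score matrix Q·Kᵀ and the weighted sum with V each take n·n·d multiply-adds per head.

```python
import numpy as np

# Hypothetical sizes for illustration: sequence length n, head dimension d.
n, d = 1024, 64
rng = np.random.default_rng(0)
Q = rng.random((n, d))
K = rng.random((n, d))
V = rng.random((n, d))

scores = Q @ K.T / np.sqrt(d)    # n*n*d multiply-adds
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                # another n*n*d multiply-adds

mads = 2 * n * n * d
print(mads)  # 134217728 multiply-adds for a single head, single forward pass
```

Eliminating or approximating these dense MAD operations is the bottleneck that multiply-add-free attention targets.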