NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
Neural Information Processing Systems
Large Language Model (LLM) inference on Central Processing Units (CPUs) is challenging due to the vast quantity of Multiply-Add (MAD) matrix operations required by the attention computation.
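To make the bottleneck concrete, below is a minimal NumPy sketch of standard scaled dot-product attention, the baseline that MAD-free approaches aim to replace; it is not the paper's NoMAD method, and all shapes and names are illustrative assumptions. The two dense matrix products (QKᵀ and the weighted sum over V) are where the bulk of the Multiply-Add work accrues.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Naive attention; both matmuls below are dense Multiply-Add (MAD) work.

    Q, K, V: (seq_len, d) arrays of queries, keys, and values.
    """
    d = Q.shape[-1]
    # QK^T: roughly seq_len * seq_len * d multiply-adds -- the MAD-heavy step
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable row-wise softmax over the attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum over V: another seq_len * seq_len * d multiply-adds
    return weights @ V

# Illustrative sizes (assumed, not taken from the paper)
rng = np.random.default_rng(0)
seq_len, d = 1024, 128
Q, K, V = (rng.standard_normal((seq_len, d)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)  # shape (1024, 128)
```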