Impossibility Results for Grammar-Compressed Linear Algebra
Neural Information Processing Systems
To handle vast amounts of data, it is natural and popular to compress vectors and matrices. When we compress a vector from size N down to size n ≪ N, it certainly makes it easier to store and transmit efficiently, but does it also make it easier to process? In this paper we consider lossless compression schemes, and ask if we can run our computations on the compressed data as efficiently as if the original data was that small. That is, if an operation has time complexity T(input-size), can we perform it on the compressed representation in time T(n) rather than T(N)? We consider the most basic linear algebra operations: inner product, matrix-vector multiplication, and matrix multiplication. In particular, given two compressed vectors, can we compute their inner product in time O(n)? Or perhaps we must decompress first and then multiply, spending Ω(N) time? The answer depends on the compression scheme.
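To make the question concrete, here is a minimal Python sketch for a much simpler lossless scheme than the grammar compression studied in the paper: run-length encoding. Under RLE, the inner product of two vectors can indeed be computed in time proportional to the total number of runs n + m rather than the decompressed length N, by merging the two run sequences. The function name and encoding convention below are illustrative, not from the paper.

```python
def rle_inner(a, b):
    """Inner product of two equal-length vectors given as run-length
    encodings: lists of (value, run_length) pairs. Runs in O(n + m)
    time, where n and m are the number of runs, not the length N
    of the decompressed vectors."""
    total = 0
    i = j = 0
    ra = a[0][1]  # remaining length of current run of a
    rb = b[0][1]  # remaining length of current run of b
    while i < len(a) and j < len(b):
        step = min(ra, rb)           # overlap of the two current runs
        total += a[i][0] * b[j][0] * step
        ra -= step
        rb -= step
        if ra == 0:                  # advance to a's next run
            i += 1
            if i < len(a):
                ra = a[i][1]
        if rb == 0:                  # advance to b's next run
            j += 1
            if j < len(b):
                rb = b[j][1]
    return total

# [(1,3),(2,2)] encodes [1,1,1,2,2]; [(3,2),(0,3)] encodes [3,3,0,0,0]
print(rle_inner([(1, 3), (2, 2)], [(3, 2), (0, 3)]))  # → 6
```

Each loop iteration consumes at least one run entirely, so the loop runs at most n + m times. The paper's point is that for more expressive schemes such as grammar compression, achieving this kind of compressed-size running time can be provably impossible.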