TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning
Neural Information Processing Systems
Efficient on-device learning requires a small memory footprint at training time to fit within tight memory constraints. Existing work addresses this by reducing the number of trainable parameters. However, this does not directly translate into memory savings, since the major bottleneck is the activations, not the parameters.
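To see why activations rather than parameters dominate training memory, consider a rough back-of-the-envelope comparison for a single convolutional layer. The layer sizes below (channel counts, resolution, batch size) are illustrative assumptions, not taken from the paper:

```python
# Rough memory comparison for one conv layer (hypothetical sizes).
# During backprop, the layer's input/output activations must be stored
# for the whole batch, while the weights are stored only once.
batch, c_in, c_out, k = 8, 128, 128, 3  # assumed batch size, channels, kernel
h = w = 56                               # assumed feature-map resolution

param_count = c_out * c_in * k * k       # weight tensor of a 3x3 conv
act_count = batch * c_out * h * w        # output activations saved for backprop

bytes_per_float = 4                      # fp32
print(f"parameters:  {param_count * bytes_per_float / 2**20:.2f} MiB")
print(f"activations: {act_count * bytes_per_float / 2**20:.2f} MiB")
print(f"ratio: {act_count / param_count:.1f}x")
```

With these assumed sizes, the stored activations are over an order of magnitude larger than the weights, which is why freezing parameters alone barely reduces training memory.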