Go Big First, Then Compress
Conventional wisdom in machine learning (ML) holds that bigger models are better. In today's ML ecosystem, dominated by supervised learning, the mantra is to go big: larger deep learning models tend to outperform smaller ones in most scenarios. However, bigger models are also slow, expensive to run, and difficult to operate. Model compression is one of the techniques that helps address these limitations.
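To make the idea concrete, here is a minimal sketch of one common compression technique, post-training quantization, which maps 32-bit float weights to 8-bit integers. The weight values and the per-tensor scale are illustrative assumptions, not taken from any real model:

```python
# Minimal sketch of post-training quantization, one model-compression
# technique. Weights below are illustrative, not from a real model.

def quantize_int8(weights):
    """Map float weights to int8 via symmetric linear quantization."""
    scale = max(abs(w) for w in weights) / 127  # one scale per tensor
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.8234, -1.27, 0.051, 0.333]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each quantized weight fits in 1 byte instead of 4 (float32),
# so the tensor is ~4x smaller, at the cost of a small
# reconstruction error per weight.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))
```

The trade-off the paragraph describes is visible directly: the quantized tensor is roughly four times smaller, while dequantizing recovers the weights only approximately.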
Mar-28-2021, 14:35:19 GMT