Learning Without Forgetting Simplified
Deep learning, driven by convolutional neural networks (CNNs), has recently become a dominant approach to computer vision tasks. A CNN must be trained well before it can be deployed in real-world applications, yet unfortunately, sufficient training data is not always available. Transfer learning addresses this by reusing the knowledge of a model pre-trained on a large dataset to solve other, related problems. However, transfer learning commonly does not consider the model's performance on its previous tasks: when knowledge is transferred to a new task, a CNN may forget what it had learned before, a phenomenon known as catastrophic forgetting. For instance, suppose a CNN classifier pre-trained to classify vehicle types is adapted via transfer learning to classify car genres. The adapted model can work well at recognizing car genres, yet it underperforms on vehicle type classification compared with what it used to do. That example illustrates the biggest shortcoming of transfer learning.
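To make the idea concrete, here is a minimal NumPy sketch of the kind of objective Learning without Forgetting uses to counter this: a standard cross-entropy loss on the new task plus a knowledge-distillation term that keeps the network's old-task outputs close to the responses recorded from the original model before fine-tuning. The function names, the temperature `T`, and the balancing weight `lam` are illustrative choices, not the exact formulation from any particular implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution,
    # which is the usual trick in knowledge distillation.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, new_labels, old_logits, old_logits_recorded,
             T=2.0, lam=1.0):
    """Sketch of a Learning-without-Forgetting style objective.

    new_logits:          current outputs on the NEW task head, shape (N, C_new)
    new_labels:          integer labels for the new task, shape (N,)
    old_logits:          current outputs on the OLD task head, shape (N, C_old)
    old_logits_recorded: outputs of the ORIGINAL (frozen) model on the
                         same inputs, recorded before fine-tuning
    """
    # Standard cross-entropy on the new task.
    p_new = softmax(new_logits)
    n = new_logits.shape[0]
    ce = -np.log(p_new[np.arange(n), new_labels] + 1e-12).mean()

    # Distillation term: soft cross-entropy between the recorded
    # old-task responses (teacher) and the current ones (student).
    q = softmax(old_logits_recorded, T)
    p = softmax(old_logits, T)
    kd = -(q * np.log(p + 1e-12)).sum(axis=-1).mean()

    # lam trades off new-task accuracy against preserving old behavior.
    return ce + lam * kd
```

The distillation term is minimized exactly when the fine-tuned network still reproduces its original old-task responses, so minimizing the combined loss learns the new task while penalizing forgetting of the old one.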
Nov-11-2021, 12:30:22 GMT
- Technology: