feature richness
Distilling Object Detectors with Feature Richness
In recent years, large-scale deep models have achieved great success, but their huge computational complexity and massive storage requirements make it a great challenge to deploy them on resource-limited devices. As a model compression and acceleration method, knowledge distillation effectively improves the performance of small models by transferring dark knowledge from the teacher detector. However, most existing distillation-based detection methods mainly imitate features near bounding boxes, which suffers from two limitations. First, they ignore the beneficial features outside the bounding boxes. Second, these methods imitate some features that are mistakenly regarded as background by the teacher detector.
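The core idea here can be sketched in a few lines: instead of restricting feature imitation to ground-truth boxes, weight the imitation loss by a per-location mask derived from the teacher's class scores, so informative regions outside boxes still contribute. This is a minimal NumPy sketch of that masking scheme; the function name, tensor shapes, and sigmoid-based mask are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_imitation_loss(student_feat, teacher_feat, teacher_cls_logits):
    """Sketch of mask-weighted feature imitation for distillation.

    student_feat, teacher_feat: (C, H, W) feature maps.
    teacher_cls_logits: (K, H, W) per-class logits from the teacher head.
    The mask is the teacher's max class probability per location, so
    regions the teacher finds informative are imitated more strongly,
    whether or not they fall inside a ground-truth box (assumed design).
    """
    # Per-location confidence: max sigmoid probability over K classes.
    probs = 1.0 / (1.0 + np.exp(-teacher_cls_logits))  # (K, H, W)
    mask = probs.max(axis=0)                           # (H, W)
    # Squared feature difference, summed over channels.
    diff = ((student_feat - teacher_feat) ** 2).sum(axis=0)  # (H, W)
    # Mask-weighted mean (epsilon guards against an all-zero mask).
    return float((mask * diff).sum() / (mask.sum() + 1e-8))
```

When the teacher assigns near-zero confidence everywhere, the mask suppresses imitation of those (likely background) locations, which addresses the second limitation noted above.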
Improving Forward Compatibility in Class Incremental Learning by Increasing Representation Rank and Feature Richness
Kim, Jaeill, Lee, Wonseok, Eo, Moonjung, Rhee, Wonjong
Class Incremental Learning (CIL) constitutes a pivotal subfield within continual learning, aimed at enabling models to progressively learn new classification tasks while retaining knowledge obtained from prior tasks. Although previous studies have predominantly focused on backward compatible approaches to mitigate catastrophic forgetting, recent investigations have introduced forward compatible methods to enhance performance on novel tasks and complement existing backward compatible methods. In this study, we introduce an effective-Rank based Feature Richness enhancement (RFR) method, designed for improving forward compatibility. Specifically, this method increases the effective rank of representations during the base session, thereby facilitating the incorporation of more informative features pertinent to unseen novel tasks. Consequently, RFR achieves dual objectives in backward and forward compatibility: minimizing feature extractor modifications and enhancing novel task performance, respectively. To validate the efficacy of our approach, we establish a theoretical connection between effective rank and the Shannon entropy of representations. Subsequently, we conduct comprehensive experiments by integrating RFR into eleven well-known CIL methods. Our results demonstrate the effectiveness of our approach in enhancing novel-task performance while mitigating catastrophic forgetting. Furthermore, our method notably improves the average incremental accuracy across all eleven cases examined.
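The theoretical connection mentioned above relies on the standard definition of effective rank: the exponential of the Shannon entropy of the normalized singular values of a representation matrix. A minimal sketch of that computation follows; the function name and matrix layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def effective_rank(feats):
    """Effective rank of an (N, D) representation matrix:
    exp of the Shannon entropy of its normalized singular values.
    A higher value indicates richer, less collapsed features."""
    s = np.linalg.svd(feats, compute_uv=False)
    p = s / s.sum()          # normalize singular values to a distribution
    p = p[p > 0]             # drop zeros to avoid log(0)
    return float(np.exp(-(p * np.log(p)).sum()))
```

For an orthogonal set of features (e.g. the identity matrix) the effective rank equals the full rank, while a rank-one matrix yields 1, matching the intuition that RFR's objective pushes base-session representations toward the former.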
Papers with Code - Papers with Code Newsletter #4
Welcome to the 4th issue of the Papers with Code newsletter. Self-attention continues to be adopted in deep learning architectures for computer vision problems such as instance segmentation and object detection. One recent example is the Vision Transformer (ViT) proposed by Dosovitskiy et al. Despite being promising for vision tasks, these large models can be computationally inefficient and underperform established vision architectures, leaving room for improvement.