Efficient Deep Learning Board: Training Feedback Is Not All You Need

Gong, Lina, Gao, Qi, Li, Peng, Wei, Mingqiang, Wu, Fei

arXiv.org Artificial Intelligence 

Abstract--Current automatic deep learning (i.e., AutoDL) frameworks rely on training feedback from actual runs, which often hinders their ability to provide quick and clear performance predictions for selecting suitable DL systems. To address this issue, we propose EfficientDL, an innovative deep learning board designed for automatic performance prediction and component recommendation. EfficientDL can quickly and precisely recommend twenty-seven system components and predict the performance of DL models without requiring any training feedback. The magic of no training feedback comes from our proposed comprehensive, multi-dimensional, fine-grained system component dataset, which enables us to develop a static performance prediction model and a comprehensively optimized component recommendation algorithm (i.e., αβ-BO search), removing the dependency on actually running parameterized models during the traditional optimization search process. The simplicity and power of EfficientDL stem from its compatibility with most DL models. For example, EfficientDL operates seamlessly with mainstream models such as ResNet50, MobileNetV3, EfficientNet-B0, MaxViT-T, Swin-B, and DaViT-T, bringing competitive performance improvements. Besides, experimental results on the CIFAR-10 dataset reveal that EfficientDL outperforms existing AutoML tools in both accuracy and efficiency (approximately 20 times faster, with a 1.31% Top-1 accuracy improvement over cutting-edge methods).

Researchers and practitioners often face the challenge of investing substantial time and computational resources to manually select suitable model architectures, tune hyperparameters (such as learning rate, batch size, and number of epochs), and augment data to align with the specific characteristics of their datasets. As a result, achieving optimal performance with deep learning models can be particularly daunting for beginners.
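The key idea above — replacing costly training runs with a static predictor that scores candidate configurations directly — can be illustrated with a minimal sketch. This is not the paper's αβ-BO algorithm or its learned predictor; both `static_predictor` and the toy search space below are hypothetical stand-ins, shown only to make the "no training feedback" loop concrete.

```python
import itertools

def static_predictor(config):
    """Hypothetical stand-in for a learned static performance predictor:
    it scores a configuration WITHOUT training the model. The real
    predictor in the paper is learned from a system component dataset;
    this toy scoring rule is an illustrative assumption."""
    lr, batch = config["lr"], config["batch_size"]
    # Toy heuristic: prefer a learning rate near 0.01 and larger batches.
    return -abs(lr - 0.01) * 100 + batch / 512

def recommend(search_space):
    """Pick the best configuration using only static predictions,
    i.e., no training feedback from actual runs."""
    keys = list(search_space)
    candidates = [dict(zip(keys, vals))
                  for vals in itertools.product(*search_space.values())]
    # Every candidate is scored instantly; no model is ever trained.
    return max(candidates, key=static_predictor)

space = {"lr": [0.1, 0.01, 0.001], "batch_size": [64, 128, 256]}
print(recommend(space))  # -> {'lr': 0.01, 'batch_size': 256}
```

In the paper's setting, exhaustive enumeration would be replaced by the αβ-BO search over twenty-seven components, but the distinguishing property is the same: the objective being optimized is a static prediction, not a measured training result.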