Lin, Wan-Yi
Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning
Wu, Xidong, Lin, Wan-Yi, Willmott, Devin, Condessa, Filipe, Huang, Yufei, Li, Zhenzhen, Ganesh, Madan Ravi
Federated Learning (FL) is a distributed training paradigm that enables clients scattered across the world to cooperatively learn a global model without divulging confidential data. However, FL faces a significant challenge in the form of heterogeneous data distributions among clients, which leads to reduced performance and robustness. A recent approach to mitigating the impact of heterogeneous data distributions is the use of foundation models, which offer better performance at the cost of larger computational overhead and slower inference. We introduce foundation model distillation to assist in the federated training of lightweight client models, increasing their performance under heterogeneous data settings while keeping inference costs low. Our results show improved global model performance on a balanced testing set, which contains rarely observed samples, even under extreme non-IID client data distributions. We conduct a thorough evaluation of our framework with different foundation model backbones on CIFAR10, with varying degrees of data heterogeneity ranging from class-specific data partitions across clients to Dirichlet data sampling parameterized by concentration values between 0.01 and 1.0.
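As a rough illustration of the heterogeneity setup this abstract describes, the sketch below partitions a labeled dataset across clients via Dirichlet sampling over per-class proportions. This is a common simulation recipe rather than the paper's code; the function name, client count, and seed are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with a Dirichlet prior.

    Smaller alpha (e.g., 0.01) yields highly skewed, near class-specific
    splits; larger alpha (e.g., 1.0) approaches an IID partition.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of class-c samples assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices

# Example: 10 clients on CIFAR10-like labels with extreme skew.
labels = np.random.randint(0, 10, size=50_000)
parts = dirichlet_partition(labels, num_clients=10, alpha=0.01)
```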
Text-driven Prompt Generation for Vision-Language Models in Federated Learning
Qiu, Chen, Li, Xingyu, Mummadi, Chaithanya Kumar, Ganesh, Madan Ravi, Li, Zhenzhen, Peng, Lu, Lin, Wan-Yi
Prompt learning for vision-language models, e.g., CoOp, has shown great success in adapting CLIP to different downstream tasks, making it a promising solution for federated learning given its low computational cost. Existing prompt learning techniques replace hand-crafted text prompts with learned vectors that offer improvements on seen classes but struggle to generalize to unseen classes. Our work addresses this challenge by proposing Federated Text-driven Prompt Generation (FedTPG), which learns a unified prompt generation network across multiple remote clients in a scalable manner. The prompt generation network is conditioned on task-related text input and is thus context-aware, allowing it to generalize to both seen and unseen classes. Our comprehensive empirical evaluations on nine diverse image classification datasets show that our method is superior to existing federated prompt learning methods, achieving better overall generalization on both seen and unseen classes as well as on unseen datasets. Vision-language models have recently emerged as a transformative technology for machine learning applications. Seminal contributions like Contrastive Language-Image Pretraining (CLIP) Radford et al. (2021) have demonstrated unprecedented capabilities in diverse image classification tasks. Classification methods often leverage manually engineered text prompts, such as "a photo of a [class]," to utilize CLIP's rich semantic features (Jia et al., 2021). CLIP has shown its robustness and versatility in handling a wide range of image distributions.
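To make the idea of a text-conditioned prompt generator concrete, here is a minimal PyTorch sketch of a network that maps embeddings of task-related text (e.g., class names) to soft prompt vectors. This is not FedTPG itself; the architecture, pooling choice, and dimensions (`text_dim=512`, `prompt_len=4`) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Minimal sketch of a text-conditioned prompt generator.

    Maps an embedding of task-related text (e.g., class names) to a set
    of soft prompt vectors, instead of learning one fixed set of vectors
    per task as in CoOp. All dimensions here are illustrative.
    """
    def __init__(self, text_dim=512, prompt_len=4, prompt_dim=512):
        super().__init__()
        self.prompt_len = prompt_len
        self.prompt_dim = prompt_dim
        self.net = nn.Sequential(
            nn.Linear(text_dim, 512),
            nn.ReLU(),
            nn.Linear(512, prompt_len * prompt_dim),
        )

    def forward(self, class_text_emb):
        # class_text_emb: (num_classes, text_dim), e.g., CLIP text features
        # of the current task's class names. Pooling over classes makes
        # the generated prompts context-aware for the task at hand.
        ctx = class_text_emb.mean(dim=0)
        return self.net(ctx).view(self.prompt_len, self.prompt_dim)

gen = PromptGenerator()
class_emb = torch.randn(5, 512)   # five classes on this client
prompts = gen(class_emb)          # (4, 512) soft prompt vectors
```

Because the generator is shared and conditioned on text rather than tied to fixed class slots, the same weights can produce prompts for class sets it never saw during federated training.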
Accelerating Road Sign Ground Truth Construction with Knowledge Graph and Machine Learning
Kim, Ji Eun, Henson, Cory, Huang, Kevin, Tran, Tuan A., Lin, Wan-Yi
Having a comprehensive, high-quality dataset of road sign annotations is critical to the success of AI-based Road Sign Recognition (RSR) systems. In practice, annotators often face difficulties in learning the road sign systems of different countries; hence, the task is often time-consuming and produces poor results. We propose a novel approach using knowledge graphs and a machine learning algorithm, the variational prototyping-encoder (VPE), to assist human annotators in classifying road signs effectively. Annotators can query the Road Sign Knowledge Graph using visual attributes and receive the closest-matching candidates suggested by the VPE model. The VPE model uses the candidates from the knowledge graph and a real sign image patch as inputs. We show that our knowledge graph approach can reduce the sign search space by 98.9%. Furthermore, with VPE, our system proposes the correct single candidate for 75% of signs in the tested datasets, eliminating the human search effort entirely in those cases.
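The retrieval flow the abstract describes (an attribute query against the knowledge graph, followed by ranking candidates against the real sign patch in an embedding space) can be sketched as below. The toy graph, attribute names, and random vectors are stand-ins for the actual Road Sign Knowledge Graph schema and learned VPE embeddings.

```python
import numpy as np

def query_candidates(kg, **attrs):
    """Filter sign templates in a toy knowledge graph by visual attributes."""
    return [s for s in kg if all(s.get(k) == v for k, v in attrs.items())]

def rank_by_embedding(query_emb, candidates, embed):
    """Rank candidate templates by distance to the query patch embedding."""
    dists = [np.linalg.norm(query_emb - embed[c["id"]]) for c in candidates]
    return [c for _, c in sorted(zip(dists, candidates), key=lambda t: t[0])]

# Toy knowledge graph of sign templates with visual attributes.
kg = [
    {"id": "de_stop", "shape": "octagon", "color": "red"},
    {"id": "de_yield", "shape": "triangle", "color": "red"},
    {"id": "de_limit_50", "shape": "circle", "color": "red"},
]
embed = {s["id"]: np.random.randn(64) for s in kg}  # stand-in for VPE embeddings
patch_emb = np.random.randn(64)                     # embedding of a real sign patch

candidates = query_candidates(kg, color="red", shape="circle")
ranked = rank_by_embedding(patch_emb, candidates, embed)
```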
Learning in Confusion: Batch Active Learning with Noisy Oracle
Gupta, Gaurav, Sahu, Anit Kumar, Lin, Wan-Yi
We study the problem of training machine learning models incrementally via active learning with access to imperfect or noisy oracles. We specifically consider the setting of batch active learning, in which multiple samples are selected at once, as opposed to a single sample as in classical settings, so as to reduce the training overhead. Our approach interpolates between uniform randomness and score-based importance sampling of clusters when selecting a batch of new samples. Experiments on benchmark image classification datasets (MNIST, SVHN, and CIFAR10) show improvement over existing active learning strategies. We further introduce an extra denoising layer in deep networks to make active learning robust to label noise, yielding significant improvements.
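A minimal sketch of the kind of selection rule the abstract alludes to, mixing a uniform distribution over clusters with a score-based one; the mixing parameter `lam`, the scoring, and the clustering are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_batch(scores, clusters, batch_size, lam=0.5, seed=0):
    """Sketch of batch selection mixing uniform and score-based sampling.

    `scores` are per-sample informativeness scores (e.g., uncertainty) and
    `clusters` assigns each sample to a cluster. With lam=0, cluster choice
    is uniformly random; with lam=1, it follows cluster scores alone.
    """
    rng = np.random.default_rng(seed)
    ids = np.unique(clusters)
    # Aggregate per-cluster scores and mix with a uniform distribution.
    cluster_scores = np.array([scores[clusters == c].mean() for c in ids])
    p_score = cluster_scores / cluster_scores.sum()
    p_uniform = np.ones(len(ids)) / len(ids)
    p = lam * p_score + (1 - lam) * p_uniform
    chosen = []
    for c in rng.choice(ids, size=batch_size, p=p):
        members = np.where(clusters == c)[0]
        chosen.append(rng.choice(members))
    return np.array(chosen)

scores = np.random.rand(1000)
clusters = np.random.randint(0, 20, size=1000)
batch = select_batch(scores, clusters, batch_size=32)
```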
DeepBbox: Accelerating Precise Ground Truth Generation for Autonomous Driving Datasets
Rathore, Govind, Lin, Wan-Yi, Kim, Ji Eun
Autonomous driving requires various computer vision algorithms, such as object detection and tracking. Precisely-labeled datasets (i.e., objects fully contained in bounding boxes with only a few extra pixels) are preferred for training such algorithms, so that they can detect the exact locations of objects. However, it is very time-consuming, and hence expensive, to generate precise labels for image sequences at scale. In this paper, we propose DeepBbox, an algorithm that "corrects" loose object labels into tight bounding boxes to reduce human annotation effort. We use the Cityscapes [1] dataset to demonstrate the annotation efficiency and accuracy improvements of DeepBbox. Experimental results show that, with DeepBbox, we can increase the number of object edges that are labeled automatically (within 1% error) by 50%, reducing manual annotation time.
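The correction step can be pictured as shrinking a loose box to the object's extent inside it. The sketch below uses a binary object mask as a stand-in for DeepBbox's learned correction, so the interface, margin, and mask source are all assumptions, not the actual algorithm.

```python
import numpy as np

def tighten_bbox(mask, loose_box, margin=2):
    """Shrink a loose bounding box to the object extent within it.

    `mask` is a binary object mask (e.g., from a segmentation network,
    standing in for a learned correction model); `loose_box` is
    (x1, y1, x2, y2). A small pixel margin is kept around the object.
    """
    x1, y1, x2, y2 = loose_box
    crop = mask[y1:y2, x1:x2]
    ys, xs = np.nonzero(crop)
    if len(xs) == 0:            # nothing detected; keep the loose box
        return loose_box
    return (
        x1 + max(xs.min() - margin, 0),
        y1 + max(ys.min() - margin, 0),
        x1 + min(xs.max() + margin, crop.shape[1] - 1),
        y1 + min(ys.max() + margin, crop.shape[0] - 1),
    )

mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 30:70] = 1                       # toy object
print(tighten_bbox(mask, (20, 30, 90, 80)))  # tight box around the object
```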