Collaborating Authors

 Han, Rui


Let Curves Speak: A Continuous Glucose Monitor based Large Sensor Foundation Model for Diabetes Management

arXiv.org Artificial Intelligence

While previous studies of AI in diabetes management focus on long-term risk, research on near-future glucose prediction remains limited, despite its importance for timely diabetes self-management. Integrating AI with continuous glucose monitoring (CGM) holds promise for near-future glucose prediction. However, existing models have limitations in capturing patterns of blood glucose fluctuations and demonstrate poor generalizability. A robust approach is needed to leverage massive CGM data for near-future glucose prediction. We propose large sensor models (LSMs) to capture knowledge in CGM data by modeling patients as sequences of glucose readings. CGM-LSM is pretrained on 15.96 million glucose records from 592 diabetes patients for near-future glucose prediction. We evaluated CGM-LSM against state-of-the-art methods using the OhioT1DM dataset across various metrics, prediction horizons, and unseen patients. Additionally, we assessed its generalizability across factors such as diabetes type, age, gender, and hour of day. CGM-LSM achieved exceptional performance, with an rMSE of 29.81 mg/dL for type 1 diabetes patients and 23.49 mg/dL for type 2 diabetes patients at a two-hour prediction horizon. On the OhioT1DM dataset, CGM-LSM achieved a one-hour rMSE of 15.64 mg/dL, halving the previous best of 31.97 mg/dL. Robustness analyses revealed consistent performance not only for unseen patients and future periods, but also across diabetes type, age, and gender. The model also adapted well to different hours of the day, maintaining accuracy across periods with varying activity intensity. CGM-LSM represents a transformative step in diabetes management by leveraging pretraining to uncover latent glucose generation patterns in sensor data. Our findings also underscore the broader potential of LSMs to drive innovation across domains involving complex sensor data.
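To make the modeling idea concrete, here is a minimal, hypothetical sketch of pretraining a transformer on glucose sequences to forecast a near-future window. The class name, architecture, dimensions, and sampling rate are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch: a transformer over CGM sequences that forecasts
# a near-future glucose window. All names and hyperparameters are
# illustrative assumptions, not the CGM-LSM specification.
import torch
import torch.nn as nn

class GlucoseLSM(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=6, horizon=24):
        super().__init__()
        self.embed = nn.Linear(1, d_model)           # embed each scalar glucose reading
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, horizon)      # forecast the next `horizon` steps

    def forward(self, x):                            # x: (batch, seq_len, 1), in mg/dL
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])                   # predict from the last position

model = GlucoseLSM()
past = torch.randn(8, 288, 1)    # e.g., 24 h of 5-minute CGM readings
future = torch.randn(8, 24)      # the next 2 h at the same resolution
loss = nn.functional.mse_loss(model(past), future)
loss.backward()
```

At a typical 5-minute CGM sampling rate, a two-hour horizon corresponds to 24 future readings, which is how the output head size is chosen in this sketch.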


Hierarchical Memory Pool Based Edge Semi-Supervised Continual Learning Method

arXiv.org Artificial Intelligence

The continuous changes in the world have resulted in the performance regression of neural networks. Therefore, the continual learning (CL) area has gradually attracted the attention of more researchers. For edge intelligence, a CL model not only needs to overcome catastrophic forgetting, but also faces the huge challenge of severely limited resources: the lack of labeled data and powerful devices. However, existing classic CL methods usually rely on a large number of labeled samples to maintain plasticity and stability, and semi-supervised learning methods often pay a large computational and memory overhead for higher accuracy. In response to these problems, we propose a low-cost semi-supervised CL method named Edge Hierarchical Memory Learner (EdgeHML). EdgeHML can effectively utilize a large number of unlabeled samples together with a small number of labeled samples. It is based on a hierarchical memory pool, leveraging a multi-level storage structure to store and replay samples. EdgeHML implements the interaction between different levels through a combination of online and offline strategies. In addition, to further reduce the computational overhead of unlabeled samples, EdgeHML leverages a progressive learning method: it reduces the computation cycles spent on unlabeled samples by controlling the learning process. Experimental results show that on three semi-supervised CL tasks, EdgeHML improves model accuracy by up to 16.35% compared with classic CL methods, and training iteration time is reduced by more than 50% compared with semi-supervised methods. EdgeHML achieves a semi-supervised CL process with high performance and low overhead for edge intelligence.
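As a rough illustration of the idea, the sketch below implements a two-level memory pool with reservoir-style eviction and a progressive schedule that skips most unlabeled computation cycles. The pool sizes, eviction rule, and skip interval are assumptions for illustration, not EdgeHML's actual design.

```python
# Illustrative two-level memory pool for semi-supervised replay:
# a small labeled level and a larger unlabeled level. All capacities
# and policies here are assumed, not taken from EdgeHML.
import random

class HierarchicalMemoryPool:
    def __init__(self, labeled_cap=200, unlabeled_cap=2000):
        self.labeled, self.unlabeled = [], []
        self.caps = {"labeled": labeled_cap, "unlabeled": unlabeled_cap}

    def add(self, sample, label=None):
        key = "labeled" if label is not None else "unlabeled"
        pool = self.labeled if label is not None else self.unlabeled
        if len(pool) >= self.caps[key]:          # reservoir-style eviction
            pool.pop(random.randrange(len(pool)))
        pool.append((sample, label))

    def replay_batch(self, n_lab=16, n_unlab=48):
        lab = random.sample(self.labeled, min(n_lab, len(self.labeled)))
        unlab = random.sample(self.unlabeled, min(n_unlab, len(self.unlabeled)))
        return lab, unlab

# Toy stream: only one sample in five arrives with a label.
stream = [((float(i),), i % 2 if i % 5 == 0 else None) for i in range(100)]
pool = HierarchicalMemoryPool()
for step, (x, y) in enumerate(stream):
    pool.add(x, y)
    lab, unlab = pool.replay_batch()
    if step % 4 != 0:    # progressive learning: skip most unlabeled cycles
        unlab = []
    # train_step(lab, unlab) would run the actual semi-supervised update
```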


FedKNOW: Federated Continual Learning with Signature Task Knowledge Integration at Edge

arXiv.org Artificial Intelligence

Deep Neural Networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily lives. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of continually retraining themselves according to the tasks on different edge devices. Federated continual learning is a promising technique that offers partial solutions but has yet to overcome the following difficulties: the significant accuracy loss due to limited on-device processing, the negative knowledge transfer caused by the limited communication of non-IID data, and the limited scalability on tasks and edge devices. In this paper, we propose FedKNOW, an accurate and scalable federated continual learning framework, built on a novel concept of signature task knowledge. FedKNOW is a client-side solution that continuously extracts and integrates the knowledge of signature tasks, which are highly influenced by the current task. Each FedKNOW client is composed of a knowledge extractor, a gradient restorer and, most importantly, a gradient integrator. Upon training for a new task, the gradient integrator prevents catastrophic forgetting and mitigates negative knowledge transfer by effectively combining signature tasks identified from past local tasks and other clients' current tasks through the global model. We implement FedKNOW in PyTorch and extensively evaluate it against state-of-the-art techniques using popular federated continual learning benchmarks. Extensive evaluation results on heterogeneous edge devices show that FedKNOW improves model accuracy by 63.24% without increasing model training time, reduces communication cost by 34.28%, and achieves further improvements under difficult scenarios such as large numbers of tasks or clients and training on complex networks.
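The gradient integrator's role can be pictured with a simple conflict-resolution rule: if the current task's gradient points against a stored signature-task gradient, project out the conflicting component, in the spirit of GEM/PCGrad. The sketch below is an illustrative stand-in, not FedKNOW's published integration rule.

```python
# Hedged sketch of gradient integration: resolve conflicts between the
# current task's gradient and stored signature-task gradients by
# projection. This is a GEM/PCGrad-style illustration, not FedKNOW's
# exact algorithm.
import torch

def integrate_gradient(g_new, signature_grads):
    g = g_new.clone()
    for g_sig in signature_grads:        # gradients of signature tasks
        dot = torch.dot(g, g_sig)
        if dot < 0:                      # conflict: update would undo old knowledge
            g -= dot / g_sig.pow(2).sum() * g_sig
    return g

g_task = torch.randn(1000)               # flattened current-task gradient
signatures = [torch.randn(1000) for _ in range(3)]
g_safe = integrate_gradient(g_task, signatures)
```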


ExpertNet: Adversarial Learning and Recovery Against Noisy Labels

arXiv.org Machine Learning

Today's datasets available in the wild, e.g., from social media and open platforms, present tremendous opportunities and challenges for deep learning, as a significant portion of tagged images carry noisy, i.e., erroneous, labels. Recent studies improve the robustness of deep models against noisy labels without knowledge of the true labels. In this paper, we advocate deriving a stronger classifier that proactively makes use of the noisy labels in addition to the original images, turning noisy labels into learning features. To this end, we propose a novel framework, ExpertNet, composed of Amateur and Expert, which iteratively learn from each other. Amateur is a regular image classifier trained on feedback from Expert, which imitates how human experts would correct Amateur's predicted labels using the noise pattern learnt from both the noisy and ground-truth labels. The trained Amateur and Expert proactively leverage the images and their noisy labels to infer image classes. Our empirical evaluations on noisy versions of CIFAR-10, CIFAR-100 and the real-world Clothing1M dataset show that the proposed model achieves robust classification against a wide range of noise ratios and with as little as 20-50% of the training data, compared to state-of-the-art deep models that solely focus on distilling the impact of noisy labels.
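A toy version of the Amateur/Expert interaction might look like the following, where Expert maps Amateur's softmax output plus the noisy label to a corrected target. The architectures, input encoding, and loss here are assumptions made for illustration, not the paper's specification.

```python
# Toy Amateur/Expert loop: Amateur classifies images; Expert combines
# Amateur's prediction with the noisy label to produce a corrected
# target. Everything below is a simplified illustration.
import torch
import torch.nn as nn

n_classes = 10
amateur = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, n_classes))
expert = nn.Sequential(nn.Linear(2 * n_classes, 64), nn.ReLU(),
                       nn.Linear(64, n_classes))
opt = torch.optim.Adam(list(amateur.parameters()) + list(expert.parameters()))

images = torch.randn(16, 3, 32, 32)              # toy batch
noisy_labels = torch.randint(0, n_classes, (16,))

for _ in range(3):                               # iterative mutual training
    pred = amateur(images)                       # Amateur's class scores
    noisy_onehot = nn.functional.one_hot(noisy_labels, n_classes).float()
    # Expert sees Amateur's (detached) prediction and the noisy label,
    # and learns the noise pattern linking the two.
    corrected = expert(torch.cat([pred.detach().softmax(-1), noisy_onehot], -1))
    # Amateur learns from Expert's corrected soft labels.
    loss = nn.functional.cross_entropy(pred, corrected.softmax(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```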


RAD: On-line Anomaly Detection for Highly Unreliable Data

arXiv.org Machine Learning

Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set. However, data collected from the wild can be unreliable due to careless annotations or malicious data transformation, leading to incorrect anomaly detection. In this paper, we present a two-layer online learning framework for robust anomaly detection (RAD) in the presence of unreliable anomaly labels, where the first layer filters out suspicious data and the second layer detects anomaly patterns from the remaining data. To adapt to the online nature of anomaly detection, we extend RAD with additional features of repetitive cleaning, conflicting opinions of classifiers, and oracle knowledge. We learn online from the incoming data streams and continuously cleanse the data, so as to adapt to the increasing learning capacity of the growing accumulated data set. Moreover, we explore the concept of oracle learning, which provides additional true-label information for difficult data points. We specifically focus on three use cases: (i) detecting 10 classes of IoT attacks, (ii) predicting 4 classes of task failures of big data jobs, and (iii) recognising the faces of 20 celebrities. Our evaluation results show that RAD robustly improves the accuracy of anomaly detection, reaching up to 98% for IoT device attacks (an 11% improvement), up to 84% for cloud task failures (a 20% improvement) under 40% noise, and up to 74% for face recognition (a 28% improvement) under 30% noisy labels. The proposed RAD is general and can be applied to different anomaly detection algorithms. Anomaly detection is one of the core operations for enforcing dependability and performance in modern distributed systems [29], [44]. Anomalies can take various forms, including erroneous data produced by a corrupted IoT device or the failure of a job executed in a datacenter [6], [7], [47]. This issue has often been addressed in recent work by relying on machine-learning-based classification algorithms over system logs [11], [13] or backend collected data [17], [46]. As workloads in real systems are highly dynamic over time, it is even more challenging to predict anomalies that cannot be easily distinguished from system dynamics, compared to systems with static workloads. In this context, a rising concern when applying classification algorithms is the accessibility to a reliable ground truth for anomalies [9].
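A compact way to picture the two-layer loop: a label-quality model screens each incoming batch, and the detector retrains on the accumulated cleansed data. The models, filtering rule, and synthetic data below are illustrative assumptions, not RAD's exact components.

```python
# Illustrative two-layer filter-then-detect loop: layer 1 flags records
# whose labels disagree with a quality model; layer 2 retrains the
# detector on the cleansed, accumulated data. Models and data are
# stand-ins for the sake of a runnable example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
quality_model = RandomForestClassifier(n_estimators=50, random_state=0)
anomaly_model = RandomForestClassifier(n_estimators=50, random_state=0)

X_seed, y_seed = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
quality_model.fit(X_seed, y_seed)            # bootstrapped on trusted seed data

kept_X, kept_y = [X_seed], [y_seed]
for _ in range(5):                           # online batches with noisy labels
    X, y = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
    agree = quality_model.predict(X) == y    # layer 1: filter suspicious labels
    kept_X.append(X[agree])
    kept_y.append(y[agree])
    # layer 2: retrain on the growing cleansed data set, mirroring the
    # "increasing learning capacity" of online accumulation
    anomaly_model.fit(np.vstack(kept_X), np.concatenate(kept_y))
    quality_model.fit(np.vstack(kept_X), np.concatenate(kept_y))
```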