A. Illustration of RCL
We illustrate the online optimization process of RCL in Figure 1. We set b = 10 and A = I for the cost function in Eqn. The testing process is almost instant and takes less than 1 second. The standalone ML model does not apply robustification during online optimization. By Theorem 4.1, there is a trade-off (governed by \lambda) between robustness and the benefit of ML predictions on those problem instances that are adversarial to ROBD.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands (0.04)
- Transportation > Ground > Road (0.69)
- Transportation > Electric Vehicle (0.69)
- Automobiles & Trucks (0.69)
- Energy (0.68)
- North America > United States > California > Riverside County > Riverside (0.14)
- North America > United States > California > Los Angeles County > Pasadena (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands (0.04)
- Transportation > Ground > Road (0.94)
- Transportation > Electric Vehicle (0.94)
- Automobiles & Trucks (0.94)
- Energy (0.93)
Robust Learning for Smoothed Online Convex Optimization with Feedback Delay
We study a general form of Smoothed Online Convex Optimization (SOCO) with multi-step switching costs and feedback delay. We propose a novel machine learning (ML) augmented online algorithm, Robustness-Constrained Learning (RCL), which combines untrusted ML predictions with a trusted expert online algorithm via constrained projection to robustify the ML prediction. Specifically, we prove that RCL is able to guarantee (1+\lambda)-competitiveness against any given expert for any \lambda > 0, while also explicitly training the ML model in a robustification-aware manner to improve the average-case performance. Importantly, RCL is the first ML-augmented algorithm with a provable robustness guarantee in the case of multi-step switching cost and feedback delay. We demonstrate the improvement of RCL in both robustness and average performance using battery management as a case study.
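The constrained projection described above can be sketched in one dimension: at each step, the ML prediction is used as-is if the algorithm's cumulative cost stays within (1+\lambda) times the expert's cumulative cost, and is otherwise pulled toward the expert action until the constraint holds. The quadratic costs, the bisection search, and all function names below are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
def hitting_cost(x, v):
    # Illustrative quadratic hitting cost (x - v)^2.
    return (x - v) ** 2

def switching_cost(x, x_prev):
    # Illustrative single-step switching cost (x - x_prev)^2.
    return (x - x_prev) ** 2

def rcl_step(ml_pred, expert_action, x_prev, v, cum_cost, cum_expert, lam):
    """Project the ML prediction so that the algorithm's cumulative cost
    stays within (1 + lam) times the expert's cumulative cost."""
    budget = (1 + lam) * (cum_expert
                          + hitting_cost(expert_action, v)
                          + switching_cost(expert_action, x_prev))

    def cost_at(x):
        return cum_cost + hitting_cost(x, v) + switching_cost(x, x_prev)

    if cost_at(ml_pred) <= budget:
        return ml_pred  # prediction already satisfies the robustness constraint
    # Otherwise, bisect along the segment from the ML prediction toward the
    # expert action to find a point within the budget (closest to the prediction).
    lo, hi = 0.0, 1.0  # fraction of the way toward the expert action
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cost_at(ml_pred + mid * (expert_action - ml_pred)) <= budget:
            hi = mid
        else:
            lo = mid
    return ml_pred + hi * (expert_action - ml_pred)
```

When the prediction is safe it passes through unchanged; in the worst case the step falls back to (near) the expert action, which is the mechanism behind the (1+\lambda) guarantee in spirit.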
Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning
Chen, Dongjie, Patwari, Kartik, Lai, Zhengfeng, Cheung, Sen-ching, Chuah, Chen-Nee
Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to a target domain using only unlabeled target data. Current SFDA methods face challenges in effectively leveraging pre-trained knowledge and exploiting target domain data. Multimodal Large Language Models (MLLMs) offer remarkable capabilities in understanding visual and textual information, but their applicability to SFDA poses challenges such as instruction-following failures, intensive computational demands, and difficulties in measuring performance prior to adaptation. To alleviate these issues, we propose Reliability-based Curriculum Learning (RCL), a novel framework that integrates multiple MLLMs for knowledge exploitation via pseudo-labeling in SFDA. Our framework incorporates the proposed Reliable Knowledge Transfer, Self-correcting and MLLM-guided Knowledge Expansion, and Multi-hot Masking Refinement to progressively exploit unlabeled data in the target domain. RCL achieves state-of-the-art (SOTA) performance on multiple SFDA benchmarks, e.g., +9.4% on DomainNet, demonstrating its effectiveness in enhancing adaptability and robustness without requiring access to source data.
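One way to read the reliability-based curriculum: pseudo-labels on which multiple MLLMs agree are treated as more reliable and are scheduled earlier in training. The sketch below uses majority-vote agreement as the reliability measure; the function names, thresholds, and the vote-based measure are illustrative assumptions, not the paper's exact components.

```python
from collections import Counter

def reliability_scores(mllm_predictions):
    """mllm_predictions: list of per-model label lists, one label per sample.
    A sample's reliability = fraction of MLLMs agreeing on its majority label."""
    n_models = len(mllm_predictions)
    n_samples = len(mllm_predictions[0])
    labels, scores = [], []
    for i in range(n_samples):
        votes = Counter(preds[i] for preds in mllm_predictions)
        label, count = votes.most_common(1)[0]
        labels.append(label)
        scores.append(count / n_models)
    return labels, scores

def curriculum_batches(samples, labels, scores, thresholds=(1.0, 0.66, 0.0)):
    """Yield training stages from most to least reliable pseudo-labels."""
    prev = float("inf")
    for t in thresholds:
        stage = [(s, l) for s, l, r in zip(samples, labels, scores)
                 if t <= r < prev]
        prev = t
        yield stage
```

Early stages then train only on high-consensus samples, and lower-consensus samples enter later once the model can help correct them.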
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Kentucky (0.04)
- North America > United States > California > Yolo County > Davis (0.04)
- Europe > Greece (0.04)
Convolutional Neural Networks with Intra-layer Recurrent Connections for Scene Labeling
Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which was originally proposed for object recognition. Different from traditional convolutional neural networks (CNNs), this model has intra-layer recurrent connections in the convolutional layers. Therefore, each convolutional layer becomes a two-dimensional recurrent neural network.
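The intra-layer recurrence can be sketched as follows: the feed-forward convolution response is computed once, and a lateral (recurrent) convolution over the layer's own output is then unrolled for a few iterations, letting each unit see a growing spatial context. The single-channel NumPy sketch below is an illustrative reduction of such a layer, not the authors' implementation; the kernel shapes and number of unrolling steps are assumptions.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D cross-correlation with 'same' zero padding
    (the deep-learning convention for 'convolution')."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def recurrent_conv_layer(x, w_feed, w_rec, steps=3):
    """Intra-layer recurrence: the feed-forward response is fixed, while the
    lateral response is unrolled for `steps` iterations.
    h(0) = relu(w_feed * x);  h(t) = relu(w_feed * x + w_rec * h(t-1))."""
    feed = conv2d_same(x, w_feed)
    h = np.maximum(feed, 0.0)
    for _ in range(steps):
        h = np.maximum(feed + conv2d_same(h, w_rec), 0.0)
    return h
```

Each unrolling step widens the effective receptive field without adding new layers, which is what makes the layer behave like a two-dimensional recurrent network over the image grid.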