Sun, Yiming
Scalable In-Context Learning on Tabular Data via Retrieval-Augmented Large Language Models
Wen, Xumeng, Zheng, Shun, Xu, Zhen, Sun, Yiming, Bian, Jiang
Recent studies have shown that large language models (LLMs), when customized with post-training on tabular data, can acquire general tabular in-context learning (TabICL) capabilities. These models are able to transfer effectively across diverse data schemas and different task domains. However, existing LLM-based TabICL approaches are constrained to few-shot scenarios due to the sequence length limitations of LLMs, as tabular instances represented in plain text consume substantial tokens. To address this limitation and enable scalable TabICL for any data size, we propose retrieval-augmented LLMs tailored to tabular data. Our approach incorporates a customized retrieval module, combined with retrieval-guided instruction-tuning for LLMs. This enables LLMs to effectively leverage larger datasets, achieving significantly improved performance across 69 widely recognized datasets and demonstrating promising scaling behavior. Extensive comparisons with state-of-the-art tabular models reveal that, while LLM-based TabICL still lags behind well-tuned numeric models in overall performance, it uncovers powerful algorithms under limited contexts, enhances ensemble diversity, and excels on specific datasets. These unique properties underscore the potential of language as a universal and accessible interface for scalable tabular data learning.
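The retrieval idea in this abstract can be illustrated with a minimal sketch, assuming a simple nearest-neighbour retriever over numeric feature vectors and a hypothetical `serialize_row` text format; the paper's actual retrieval module and prompt design are more involved:

```python
import numpy as np

def serialize_row(features, names):
    # Render one tabular instance as plain text (hypothetical format).
    return ", ".join(f"{n}={v}" for n, v in zip(names, features))

def build_icl_prompt(X_train, y_train, x_query, names, k=3):
    # Retrieve the k nearest training rows to the query (Euclidean
    # distance) and pack only those into the in-context prompt,
    # instead of spending tokens on the full dataset.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    lines = [f"{serialize_row(X_train[i], names)} -> label={y_train[i]}" for i in idx]
    lines.append(f"{serialize_row(x_query, names)} -> label=?")
    return "\n".join(lines)

X = np.array([[0.0, 1.0], [5.0, 5.0], [0.1, 0.9], [4.9, 5.2]])
y = np.array([0, 1, 0, 1])
prompt = build_icl_prompt(X, y, np.array([0.05, 0.95]), ["f1", "f2"], k=2)
print(prompt)
```

Because only the k retrieved rows enter the context, the prompt length stays fixed as the dataset grows, which is what makes the approach scalable.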
Integrating Object Detection Modality into Visual Language Model for Enhanced Autonomous Driving Agent
He, Linfeng, Sun, Yiming, Wu, Sihao, Liu, Jiaxu, Huang, Xiaowei
In this paper, we propose a novel framework for enhancing visual comprehension in autonomous driving systems by integrating visual language models (VLMs) with an additional visual perception module specialised in object detection. We extend the Llama-Adapter architecture by incorporating a YOLOS-based detection network alongside the CLIP perception network, addressing limitations in object detection and localisation. Our approach introduces camera ID-separators to improve multi-view processing, which is crucial for comprehensive environmental awareness. Experiments on the DriveLM visual question answering challenge demonstrate significant improvements over baseline models, with enhanced performance on ChatGPT scores, BLEU scores, and CIDEr metrics, indicating closer alignment of model answers with the ground truth. Our method represents a promising step towards more capable and interpretable autonomous driving systems. We also discuss possible safety enhancements enabled by the detection modality.
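The camera ID-separator idea can be sketched as follows; the `<CAM_i>` token format and the detection tuples are hypothetical stand-ins for the model's actual inputs, shown only to illustrate how per-view detections are kept attributable:

```python
def format_multiview_prompt(detections_by_camera, question):
    # detections_by_camera: {camera_id: [(label, (x, y))]} — hypothetical
    # detector output. A <CAM_i> separator token marks each view so the
    # language model can attribute objects to the correct camera.
    parts = []
    for cam_id, dets in sorted(detections_by_camera.items()):
        objs = "; ".join(f"{label} at ({x}, {y})" for label, (x, y) in dets)
        parts.append(f"<CAM_{cam_id}> {objs or 'no detections'}")
    return " ".join(parts) + f" Question: {question}"

dets = {0: [("car", (12, 34))], 1: []}
out = format_multiview_prompt(dets, "Is there a vehicle ahead?")
print(out)
```

Keeping an explicit per-camera segment, even when a view has no detections, prevents the model from conflating objects across views.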
Learning Multimodal Cues of Children's Uncertainty
Cheng, Qi, İnan, Mert, Mbarki, Rahma, Grmek, Grace, Choi, Theresa, Sun, Yiming, Persaud, Kimele, Wang, Jenny, Alikhani, Malihe
Understanding uncertainty plays a critical role in achieving common ground (Clark et al., 1983). This is especially important for multimodal AI systems that collaborate with users to solve a problem or guide the user through a challenging concept. In this work, for the first time, we present a dataset annotated in collaboration with developmental and cognitive psychologists for the purpose of studying nonverbal cues of uncertainty. We then present an analysis of the data, studying different roles of uncertainty and its relationship with task difficulty and performance. Lastly, we present a multimodal machine learning model that can predict uncertainty given a real-time video clip of a participant, which we find improves upon a baseline multimodal transformer model. This work informs research on cognitive coordination in both human-human and human-AI interaction and has broad implications for gesture understanding and generation. The anonymized version of our data and code will be publicly available upon the completion of the required consent forms and data sheets.
Transfer Learning with Clinical Concept Embeddings from Large Language Models
Gao, Yuhe, Bao, Runxue, Ji, Yuelyu, Sun, Yiming, Song, Chenxi, Ferraro, Jeffrey P., Ye, Ye
Knowledge sharing is crucial in healthcare, especially when leveraging data from multiple clinical sites to address data scarcity, reduce costs, and enable timely interventions. Transfer learning can facilitate cross-site knowledge transfer, but a major challenge is heterogeneity in clinical concepts across different sites. Large Language Models (LLMs) show significant potential for capturing the semantic meaning of clinical concepts and reducing heterogeneity. This study analyzed electronic health records from two large healthcare systems to assess the impact of semantic embeddings from LLMs on local, shared, and transfer learning models. Results indicate that domain-specific LLMs, such as Med-BERT, consistently perform best in local and direct transfer scenarios, while generic models like OpenAI embeddings require fine-tuning for optimal performance. However, excessive tuning of models with biomedical embeddings may reduce effectiveness, emphasizing the need for balance. This study highlights the importance of domain-specific embeddings and careful model tuning for effective knowledge transfer in healthcare.
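The role of semantic embeddings in reducing cross-site concept heterogeneity can be sketched as a nearest-neighbour alignment; the toy vectors below are stand-ins for embeddings from a model such as Med-BERT, and the concept names are illustrative:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def align_concepts(src_emb, tgt_emb):
    # Map each source-site concept to its most semantically similar
    # target-site concept, so models trained at one site can consume
    # another site's vocabulary.
    mapping = {}
    for s, u in src_emb.items():
        mapping[s] = max(tgt_emb, key=lambda t: cosine(u, tgt_emb[t]))
    return mapping

src = {"MI": np.array([1.0, 0.1])}
tgt = {"heart attack": np.array([0.9, 0.2]), "fracture": np.array([0.0, 1.0])}
print(align_concepts(src, tgt))
```

In practice the embeddings come from the LLM's encoder rather than hand-written vectors, but the alignment step is the same.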
Free-form Grid Structure Form Finding based on Machine Learning and Multi-objective Optimisation
Meng, Yiping, Sun, Yiming
Free-form structural forms are widely used in the design of spatial structures for their irregular spatial morphology. Current free-form form-finding methods cannot adequately account for material properties, structural requirements, or construction conditions, which leads to deviations between the initial 3D geometric design model and the constructed free-form structure. The main focus of this paper is therefore to improve the rationality of free-form morphology under multiple objectives, in line with the characteristics and constraints of the material. Glued laminated timber is selected as a case study. Firstly, machine learning is adopted for its predictive capability. By selecting a free-form timber grid structure and following the principles of NURBS, the free-form structure is simplified into free-form curves. A transformer model is trained to predict the curvatures of the curves in view of the material characteristics. After the curvatures are predicted, the curves are transformed into vectors consisting of control points, weights, and knot vectors. To ensure the constructability and robustness of the structure, the optimisation objectives are minimising structural mass, stress, and strain energy. Two parameters of the free-form morphology (the weights and the z-coordinates of the control points) are extracted as the optimisation variables. An evolutionary algorithm is selected as the optimisation tool for its capability to optimise multiple parameters. While optimising the two variables, mechanical performance indexes such as the maximum displacement in the z-direction are reported at the 60th step. The optimisation results for structural mass, stress, and strain energy after 60 steps show oscillating convergence, which indicates the efficiency of the proposed multi-objective optimisation.
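The NURBS representation mentioned above can be sketched with the standard Cox–de Boor recursion; the degree, knot vector, and control points below are illustrative, not taken from the paper:

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    # Cox–de Boor recursion for the i-th B-spline basis of degree p
    # (valid for t in the half-open parameter range [0, 1)).
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_point(t, ctrl, weights, knots, p=2):
    # Rational combination of control points: the weights let the curve
    # favour particular control points, matching the paper's choice of
    # weights and control-point z-coordinates as design variables.
    basis = np.array([bspline_basis(i, p, t, knots) for i in range(len(ctrl))])
    return (basis * weights) @ ctrl / (basis @ weights)

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
weights = np.array([1.0, 1.0, 1.0])
knots = [0, 0, 0, 1, 1, 1]  # clamped quadratic (Bezier-equivalent)
print(nurbs_point(0.5, ctrl, weights, knots))
```

With unit weights and this clamped knot vector the curve reduces to a quadratic Bezier, so the midpoint evaluates to (1.0, 1.0); raising a weight pulls the curve toward that control point.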
One-shot Active Learning Based on Lewis Weight Sampling for Multiple Deep Models
Huang, Sheng-Jun, Li, Yi, Sun, Yiming, Tang, Ying-Peng
Active learning (AL) for multiple target models aims to reduce labeled data querying while effectively training multiple models concurrently. Existing AL algorithms often rely on iterative model training, which can be computationally expensive, particularly for deep models. In this paper, we propose a one-shot AL method to address this challenge, which performs all label queries without repeated model training. Specifically, we extract different representations of the same dataset using distinct network backbones, and actively learn the linear prediction layer on each representation via an $\ell_p$-regression formulation. The regression problems are solved approximately by sampling and reweighting the unlabeled instances based on their maximum Lewis weights across the representations. An upper bound on the number of samples needed is provided with a rigorous analysis for $p\in [1, +\infty)$. Experimental results on 11 benchmarks show that our one-shot approach achieves competitive performances with the state-of-the-art AL methods for multiple target models.
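For p = 2, Lewis weights reduce to statistical leverage scores, so the sampling step can be sketched as follows; this is an illustrative simplification of the paper's general ℓp scheme, with random matrices standing in for the network-backbone representations:

```python
import numpy as np

def leverage_scores(A):
    # For p = 2, Lewis weights coincide with statistical leverage
    # scores, computable from a thin QR factorisation.
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def one_shot_sample(reps, m, rng):
    # reps: list of n x d_k representation matrices of the same n rows.
    # Sample m rows with probability proportional to each row's maximum
    # leverage score across representations; the 1/(m * prob) reweighting
    # keeps the sampled regression objective unbiased.
    scores = np.max([leverage_scores(A) for A in reps], axis=0)
    probs = scores / scores.sum()
    idx = rng.choice(len(probs), size=m, replace=True, p=probs)
    weights = 1.0 / (m * probs[idx])
    return idx, weights

rng = np.random.default_rng(0)
reps = [rng.standard_normal((100, 5)), rng.standard_normal((100, 8))]
idx, w = one_shot_sample(reps, 20, rng)
print(len(idx))
```

All label queries are determined by `idx` in one shot; no model is retrained between queries, which is the point of the approach.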
Multi: Multimodal Understanding Leaderboard with Text and Images
Zhu, Zichen, Xu, Yang, Chen, Lu, Yang, Jingkai, Ma, Yichuan, Sun, Yiming, Wen, Hailin, Liu, Jiaqi, Cai, Jinyu, Ma, Yingzi, Zhang, Situo, Zhao, Zihan, Sun, Liangtai, Yu, Kai
Rapid progress in multimodal large language models (MLLMs) highlights the need to introduce challenging yet realistic benchmarks to the academic community. Existing benchmarks primarily focus on simple natural-image understanding; Multi, by contrast, is a cutting-edge benchmark offering a comprehensive dataset for evaluating MLLMs on their understanding of complex figures, tables, and scientific questions. Reflecting current realistic examination styles, the benchmark provides multimodal inputs and requires responses that are either precise or open-ended, similar to real-life school tests. It challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. Multi includes over 18,000 questions, with a focus on science-based QA in diverse formats. We also introduce Multi-Elite, a 500-question subset for testing the limits of MLLMs, and Multi-Extend, which supports in-context learning research with more than 4,500 knowledge pieces. Our evaluation indicates significant room for MLLM advancement, with GPT-4V achieving a 63.7% accuracy rate on Multi, in contrast to other MLLMs scoring between 31.3% and 53.7%. Multi serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
Online Transfer Learning for RSV Case Detection
Sun, Yiming, Gao, Yuhe, Bao, Runxue, Cooper, Gregory F., Espino, Jessi, Hochheiser, Harry, Michaels, Marian G., Aronis, John M., Ye, Ye
Machine learning has made substantial advancements in recent decades, with its applications spanning a wide range of fields such as image and speech recognition, natural language processing, and autonomous driving. Despite these achievements, machine learning in biomedicine faces significant challenges, particularly in data collection. The acquisition of labeled data can be very costly or even unfeasible due to factors like ethical considerations, patient privacy, and the scarcity of certain diseases. These challenges have led researchers to increasingly rely on utilizing data from related domains that have a more abundant supply of data. In such cases, transferring knowledge from the source domain becomes crucial, particularly because the limited initial data in the target domain may be insufficient for effective learning. The extensive and diverse information available from the source domains can significantly compensate for this shortfall, providing a foundational knowledge base that the model can build upon as more target domain data becomes available. Therefore, the efficiency and effectiveness of learning in the target domain are greatly enhanced by the transferred knowledge from the source domains. Online transfer learning entails leveraging knowledge from a static source domain and applying it to an ongoing, evolving target domain.
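The interplay of a static source model and an evolving target model can be sketched with a Hedge-style expert combination; this is an illustrative scheme, not the paper's exact algorithm, and all dimensions and rates below are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineTransfer:
    # A frozen source model and an online-trained target model vote on
    # each instance; the trust weights shift toward whichever expert
    # incurs less loss as target data accumulates.
    def __init__(self, w_src, dim, lr=0.1, eta=0.5):
        self.w_src = w_src          # fixed weights from the source domain
        self.w_tgt = np.zeros(dim)  # target model, learned online
        self.alpha = np.array([0.5, 0.5])  # [source, target] trust
        self.lr, self.eta = lr, eta

    def predict_update(self, x, y):
        p_src = sigmoid(self.w_src @ x)
        p_tgt = sigmoid(self.w_tgt @ x)
        p = self.alpha[0] * p_src + self.alpha[1] * p_tgt
        # Exponentially down-weight the expert with larger squared error.
        losses = np.array([(p_src - y) ** 2, (p_tgt - y) ** 2])
        self.alpha *= np.exp(-self.eta * losses)
        self.alpha /= self.alpha.sum()
        # Online gradient step on the target model only.
        self.w_tgt += self.lr * (y - p_tgt) * x
        return p

rng = np.random.default_rng(1)
model = OnlineTransfer(w_src=np.array([1.0, -1.0]), dim=2)
for _ in range(200):
    x = rng.standard_normal(2)
    y = float(x[0] - x[1] > 0)  # here the target concept matches the source
    model.predict_update(x, y)
print(model.alpha)
```

Early on, the source model compensates for the target model's lack of data; as the target model improves, the trust weights adapt automatically.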
A Survey of Heterogeneous Transfer Learning
Bao, Runxue, Sun, Yiming, Gao, Yuhe, Wang, Jindong, Yang, Qiang, Chen, Haifeng, Mao, Zhi-Hong, Ye, Ye
The application of transfer learning, an approach utilizing knowledge from a source domain to enhance model performance in a target domain, has seen a tremendous rise in recent years, underpinning many real-world scenarios. The key to its success lies in the shared common knowledge between the domains, a prerequisite in most transfer learning methodologies. These methods typically presuppose identical feature spaces and label spaces in both domains, known as homogeneous transfer learning, which, however, is not always a practical assumption. Oftentimes, the source and target domains vary in feature spaces, data distributions, and label spaces, making it challenging or costly to secure source domain data with identical feature and label spaces as the target domain. Arbitrary elimination of these differences is not always feasible or optimal. Thus, heterogeneous transfer learning, acknowledging and dealing with such disparities, has emerged as a promising approach for a variety of tasks. Despite the existence of a survey in 2017 on this topic, the fast-paced advances post-2017 necessitate an updated, in-depth review. We therefore present a comprehensive survey of recent developments in heterogeneous transfer learning methods, offering a systematic guide for future research. Our paper reviews methodologies for diverse learning scenarios, discusses the limitations of current studies, and covers various application contexts, including Natural Language Processing, Computer Vision, Multimodality, and Biomedicine, to foster a deeper understanding and spur future research.
Prediction of COVID-19 Patients' Emergency Room Revisit using Multi-Source Transfer Learning
Ji, Yuelyu, Gao, Yuhe, Bao, Runxue, Li, Qi, Liu, Disheng, Sun, Yiming, Ye, Ye
The coronavirus disease 2019 (COVID-19) has led to a global pandemic of significant severity. In addition to its high level of contagiousness, COVID-19 can have a heterogeneous clinical course, ranging from asymptomatic carriers to severe and potentially life-threatening health complications. Many patients have to revisit the emergency room (ER) within a short time after discharge, which significantly increases the workload for medical staff. Early identification of such patients is crucial for helping physicians focus on treating life-threatening cases. In this study, we obtained Electronic Health Records (EHRs) of 3,210 encounters from 13 affiliated ERs within the University of Pittsburgh Medical Center between March 2020 and January 2021. We leveraged a Natural Language Processing technique, ScispaCy, to extract clinical concepts and used the 1001 most frequent concepts to develop 7-day revisit models for COVID-19 patients in ERs. The research data we collected from 13 ERs may have distributional differences that could affect the model development. To address this issue, we employed a classic deep transfer learning method called the Domain Adversarial Neural Network (DANN) and evaluated different modeling strategies, including the Multi-DANN algorithm, the Single-DANN algorithm, and three baseline methods. Results showed that the Multi-DANN models outperformed the Single-DANN models and baseline models in predicting revisits of COVID-19 patients to the ER within 7 days after discharge. Notably, the Multi-DANN strategy effectively addressed the heterogeneity among multiple source domains and improved the adaptation of source data to the target domain. Moreover, the high performance of Multi-DANN models indicates that EHRs are informative for developing a prediction model to identify COVID-19 patients who are very likely to revisit an ER within 7 days after discharge.
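The gradient-reversal idea behind DANN can be sketched for a linear feature extractor; the dimensions, losses, and update rule below are illustrative, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dann_step(W, w_lab, w_dom, x, y, d, lr=0.05, lam=1.0):
    # One illustrative DANN update. The domain head learns to tell
    # source from target, while the REVERSED (negated) domain gradient
    # pushes the shared features toward domain invariance.
    f = W @ x
    p_lab = sigmoid(w_lab @ f)     # task prediction
    p_dom = sigmoid(w_dom @ f)     # domain prediction
    g_lab = (p_lab - y) * w_lab    # dL_label / df
    g_dom = (p_dom - d) * w_dom    # dL_domain / df
    w_lab -= lr * (p_lab - y) * f  # task head: minimise label loss
    w_dom -= lr * (p_dom - d) * f  # domain head: minimise domain loss
    # Gradient reversal: the extractor descends the label loss but
    # ascends the domain loss (note the minus sign on g_dom).
    W -= lr * np.outer(g_lab - lam * g_dom, x)
    return W, w_lab, w_dom

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)) * 0.1
w_lab, w_dom = np.zeros(3), np.zeros(3)
for _ in range(50):
    x = rng.standard_normal(4)
    W, w_lab, w_dom = dann_step(W, w_lab, w_dom, x,
                                y=float(x[0] > 0),
                                d=float(rng.random() < 0.5))
print(W.shape)
```

The Multi-DANN setting extends this by treating each of the 13 ERs as a separate source domain with its own domain label rather than a single binary one.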