Xie, Jun
Feature Matching Intervention: Leveraging Observational Data for Causal Representation Learning
Li, Haoze, Xie, Jun
Causal representation learning [Schölkopf et al., 2021] aims to uncover causal features from high-dimensional observations, and is emerging as a prominent field at the intersection of deep learning and causal inference. Unlike traditional causal inference, which estimates the effect of a specific treatment variable, causal representation learning does not treat any observed variable as a potential causal parent. Instead, it transforms the observational space into a low-dimensional space in which causal parents can be identified. Despite its promise, recent years have witnessed notable shortcomings in effectively capturing causal features, particularly in tasks such as image classification. Numerous experiments over the past decade [Geirhos et al., 2020, Pezeshki et al., 2021, Beery et al., 2018, Nagarajan et al., 2020] have highlighted the failure of models to discern essential features: models optimized on training data exhibit catastrophic performance when tested in unseen environments. This failure stems from models relying on spurious features in the data, such as the background color of an image, rather than the genuine features essential for accurate classification, such as the inherent properties of the depicted objects. Consequently, models are susceptible to errors, particularly when faced with adversarial examples. This phenomenon is commonly known as the out-of-distribution (OOD) problem, and efforts to mitigate it are termed out-of-distribution generalization or domain generalization. Many approaches have been proposed to tackle this challenge.
Subconscious Robotic Imitation Learning
Xie, Jun, Wang, Zhicheng, Tan, Jianwei, Lin, Huanxu, Ma, Xiaoguang
Although robotic imitation learning (RIL) is promising for embodied intelligent robots, existing RIL approaches rely on computationally intensive multi-model trajectory predictions, resulting in slow execution and limited real-time responsiveness. In contrast, the human subconscious constantly processes and stores vast amounts of information from experience, perception, and learning, allowing people to perform complex actions, such as riding a bike, without consciously thinking about each step. Inspired by this phenomenon in action neurology, we introduce subconscious robotic imitation learning (SRIL), in which cognitive offloading is combined with historical action chunking to reduce delays caused by model inference, thereby accelerating task execution. This process is further enhanced by subconscious downsampling and a pattern-augmented learning policy, in which intent-rich information is addressed with quantized sampling techniques to improve manipulation efficiency. Experimental results demonstrate that the execution speed of SRIL is 100\% to 200\% faster than SOTA policies on comprehensive dual-arm tasks, with consistently higher success rates.
Deliberation in Latent Space via Differentiable Cache Augmentation
Liu, Luyang, Pfeiffer, Jonas, Wu, Jiaxing, Xie, Jun, Szlam, Arthur
Techniques enabling large language models (LLMs) to "think more" by generating and attending to intermediate reasoning steps have shown promise in solving complex problems. However, the standard approaches generate sequences of discrete tokens immediately before responding, and so they can incur significant latency costs and be challenging to optimize. In this work, we demonstrate that a frozen LLM can be augmented with an offline coprocessor that operates on the model's key-value (kv) cache. This coprocessor augments the cache with a set of latent embeddings designed to improve the fidelity of subsequent decoding. We train this coprocessor using the language modeling loss from the decoder on standard pretraining data, while keeping the decoder itself frozen. This approach enables the model to learn, in an end-to-end differentiable fashion, how to distill additional computation into its kv-cache. Because the decoder remains unchanged, the coprocessor can operate offline and asynchronously, and the language model can function normally if the coprocessor is unavailable or if a given cache is deemed not to require extra computation. We show experimentally that when a cache is augmented, the decoder achieves lower perplexity on numerous subsequent tokens. Furthermore, even without any task-specific training, our experiments demonstrate that cache augmentation consistently reduces perplexity and improves performance across a range of reasoning-intensive tasks.
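A minimal, purely illustrative sketch of the cache-augmentation idea: a toy single-head attention "decoder" reads a key-value cache, and a coprocessor appends extra latent (key, value) embeddings to that cache without touching the decoder. In the actual method the latents are trained end-to-end through the frozen decoder's language modeling loss; here they are simply derived from the cache mean, and all names are assumptions.

```python
# Toy sketch: a frozen "decoder" attends over a kv cache; a coprocessor
# augments the cache with latent embeddings before decoding resumes.
import math

def attend(query, cache):
    """Single-head dot-product attention over a list of (key, value) pairs."""
    scores = [sum(q * k for q, k in zip(query, key)) for key, _ in cache]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    dim = len(cache[0][1])
    return [sum(w * val[i] for w, (_, val) in zip(weights, cache)) / z
            for i in range(dim)]

def coprocessor(cache, num_latents=2):
    """Illustrative stand-in for the trained coprocessor: derives latent
    (key, value) embeddings from the mean of the cache and appends them.
    The decoder itself is left unchanged."""
    dim = len(cache[0][0])
    mean_key = [sum(k[i] for k, _ in cache) / len(cache) for i in range(dim)]
    mean_val = [sum(v[i] for _, v in cache) / len(cache) for i in range(dim)]
    return cache + [(mean_key, mean_val) for _ in range(num_latents)]

cache = [([1.0, 0.0], [0.5, 0.5]), ([0.0, 1.0], [0.2, 0.8])]
out_plain = attend([1.0, 1.0], cache)           # decoding without augmentation
out_aug = attend([1.0, 1.0], coprocessor(cache))  # decoding with augmented cache
```

Because the augmentation only appends entries to the cache, the decoder can fall back to `out_plain` whenever the coprocessor is unavailable, which mirrors the asynchronous-operation property described above.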
Large Language Model for Multi-Domain Translation: Benchmarking and Domain CoT Fine-tuning
Hu, Tianxiang, Zhang, Pei, Yang, Baosong, Xie, Jun, Wong, Derek F., Wang, Rui
Achieving consistently high-quality machine translation (MT) across diverse domains remains a significant challenge, primarily due to the limited and imbalanced parallel training data available in various domains. While large language models (LLMs) have demonstrated impressive general understanding and generation abilities, their potential in multi-domain MT is under-explored. We establish a comprehensive benchmark for multi-domain translation, featuring 25 German$\Leftrightarrow$English and 22 Chinese$\Leftrightarrow$English test sets covering 15 domains. Our evaluation of prominent LLMs reveals a discernible performance gap against traditional MT systems, highlighting domain overfitting and catastrophic forgetting after fine-tuning on domain-limited corpora. To mitigate this, we propose a domain Chain of Thought (CoT) fine-tuning technique that utilizes the intrinsic multi-domain intelligence of LLMs to improve translation performance. This method prompts the LLM to perceive domain information from the source text, which then serves as a helpful hint to guide the translation process. Despite being trained on a small dataset covering only four domains, our CoT fine-tuning approach achieves notable improvements in translation accuracy and domain robustness over traditional fine-tuning, as evidenced by an average increase of 1.53 BLEU across more than 20 distinct German$\rightarrow$English out-of-domain tests.
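To make the idea concrete, a hypothetical prompt template for this style of domain CoT fine-tuning might ask the model to first state the domain of the source sentence and only then translate, using that domain as a hint. The wording below is illustrative and does not reproduce the paper's actual template.

```python
# Hypothetical domain-CoT prompt builder: the model is asked to identify
# the domain first, then translate with that domain as a hint.
def build_domain_cot_prompt(src, src_lang="German", tgt_lang="English"):
    return (
        f"Source ({src_lang}): {src}\n"
        "Step 1: Identify the domain of the source text "
        "(e.g., medical, legal, IT, news).\n"
        f"Step 2: Using the identified domain as a hint, "
        f"translate the text into {tgt_lang}.\n"
        f"Translation ({tgt_lang}):"
    )

prompt = build_domain_cot_prompt("Der Patient erhielt 5 mg des Wirkstoffs.")
```

During fine-tuning, target outputs would pair the domain statement with the reference translation, so the model learns to produce both.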
User-LLM: Efficient LLM Contextualization with User Embeddings
Ning, Lin, Liu, Luyang, Wu, Jiaxing, Wu, Neo, Berlowitz, Devora, Prakash, Sushant, Green, Bradley, O'Banion, Shawn, Xie, Jun
Large language models (LLMs) have revolutionized natural language processing. However, effectively incorporating complex and potentially noisy user interaction data remains a challenge. To address this, we propose User-LLM, a novel framework that leverages user embeddings to contextualize LLMs. These embeddings, distilled from diverse user interactions using self-supervised pretraining, capture latent user preferences and their evolution over time. We integrate these user embeddings with LLMs through cross-attention and soft-prompting, enabling LLMs to dynamically adapt to user context. Our comprehensive experiments on MovieLens, Amazon Review, and Google Local Review datasets demonstrate significant performance gains across various tasks. Notably, our approach outperforms text-prompt-based contextualization on long sequence tasks and tasks that require deep user understanding while being computationally efficient. We further incorporate Perceiver layers to streamline the integration between user encoders and LLMs, reducing computational demands.
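The two integration routes described above can be sketched in miniature: a user embedding either prepended as a soft prompt ahead of the token embeddings, or injected by letting each token cross-attend to user-embedding states. Shapes and names below are assumptions for illustration, not the framework's actual API.

```python
# Toy sketch of the two contextualization routes: soft-prompting and
# cross-attention over user-embedding states (single head, dot product).
import math

def soft_prompt(user_embedding, token_embeddings):
    """Prepend the user embedding as an extra 'token' the LLM attends to."""
    return [user_embedding] + token_embeddings

def cross_attend(token, user_states):
    """Each token queries the user-embedding states and receives a
    softmax-weighted combination of them."""
    scores = [sum(t * u for t, u in zip(token, s)) for s in user_states]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return [sum(wi * s[i] for wi, s in zip(w, user_states)) / z
            for i in range(len(token))]

tokens = [[1.0, 0.0], [0.0, 1.0]]
user = [0.3, 0.7]                      # distilled user embedding
prompted = soft_prompt(user, tokens)   # soft-prompting route
ctx = cross_attend(tokens[0], [user, [0.9, 0.1]])  # cross-attention route
```

The cross-attention route keeps the user context out of the token sequence itself, which is one reason it can stay cheap for long interaction histories.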
A Day-to-Day Dynamical Approach to the Most Likely User Equilibrium Problem
Li, Jiayang, Wang, Qianni, Feng, Liyang, Xie, Jun, Nie, Yu Marco
The lack of a unique user equilibrium (UE) route flow in traffic assignment has posed a significant challenge to many transportation applications. The maximum-entropy principle, which advocates for the consistent selection of the most likely solution as a representative, is often used to address the challenge. Built on a recently proposed day-to-day (DTD) discrete-time dynamical model called cumulative logit (CULO), this study provides a new behavioral underpinning for the maximum-entropy UE (MEUE) route flow. It has been proven that CULO can reach a UE state without presuming travelers are perfectly rational. Here, we further establish that CULO always converges to the MEUE route flow if (i) travelers have zero prior information about routes and thus are forced to give all routes an equal choice probability, or (ii) all travelers gather information from the same source such that the so-called general proportionality condition is satisfied. Thus, CULO may be used as a practical solution algorithm for the MEUE problem. To put this idea into practice, we propose to eliminate the route enumeration requirement of the original CULO model through an iterative route discovery scheme. We also examine the discrete-time versions of four popular continuous-time dynamical models and compare them to CULO. The analysis shows that the replicator dynamic is the only one that has the potential to reach the MEUE solution with some regularity. The analytical results are confirmed through numerical experiments.
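A drastically simplified day-to-day dynamic in the spirit of CULO can illustrate the zero-prior-information case: travelers choose among parallel routes with logit probabilities driven by cumulative route costs, starting from no information (equal choice probabilities). With identical flow-independent route costs, every split is a UE, and the dynamic settles on the equal, i.e., maximum-entropy, split. All parameters are illustrative, and this toy omits the congestion effects and proportionality conditions analyzed in the paper.

```python
# Toy day-to-day logit dynamic: route choice probabilities follow a logit
# model on cumulative costs, starting from zero prior information.
import math

def logit(costs, theta=1.0):
    """Logit choice probabilities over route costs."""
    e = [math.exp(-theta * c) for c in costs]
    z = sum(e)
    return [x / z for x in e]

def day_to_day(cost_fns, init_flows, demand=100.0, days=50, theta=0.05):
    cum = [0.0] * len(cost_fns)    # cumulative costs: zero prior information
    flows = list(init_flows)
    for _ in range(days):
        costs = [f(x) for f, x in zip(cost_fns, flows)]
        cum = [c + d for c, d in zip(cum, costs)]
        flows = [demand * p for p in logit(cum, theta)]
    return flows

# Three routes with identical constant costs: every split is a UE, and the
# dynamic converges to the equal (maximum-entropy) split from a skewed start.
flows = day_to_day([lambda x: 1.0] * 3, init_flows=[90.0, 5.0, 5.0])
```

The interesting cases in the paper involve congested networks where link flows are unique but route flows are not; this sketch only shows the entropy-maximizing tendency of the equal-prior logit dynamic in the degenerate setting.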
Improving Neural Machine Translation by Multi-Knowledge Integration with Prompting
Wang, Ke, Xie, Jun, Zhang, Yuqi, Zhao, Yu
Improving neural machine translation (NMT) systems with prompting has achieved significant progress in recent years. In this work, we focus on integrating multi-knowledge, i.e., multiple types of knowledge, into NMT models to enhance performance with prompting. We propose a unified framework that can effectively integrate multiple types of knowledge, including sentences, terminologies/phrases, and translation templates, into NMT models. We utilize these types of knowledge as prefix-prompts of the input for the encoder and decoder of NMT models to guide the translation process. The approach requires no changes to the model architecture and effectively adapts to domain-specific translation without retraining. Experiments on English-Chinese and English-German translation demonstrate that our approach significantly outperforms strong baselines, achieving high translation quality and terminology match accuracy.
The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024
Kiefer, Benjamin, Žust, Lojze, Kristan, Matej, Perš, Janez, Teršek, Matija, Wiliem, Arnold, Messmer, Martin, Yang, Cheng-Yen, Huang, Hsiang-Wei, Jiang, Zhongyu, Kuo, Heng-Cheng, Mei, Jie, Hwang, Jenq-Neng, Stadler, Daniel, Sommer, Lars, Huang, Kaer, Zheng, Aiguo, Chong, Weitu, Lertniphonphan, Kanokphan, Xie, Jun, Chen, Feng, Li, Jian, Wang, Zhepeng, Zedda, Luca, Loddo, Andrea, Di Ruberto, Cecilia, Vu, Tuan-Anh, Nguyen-Truong, Hai, Ha, Tan-Sang, Pham, Quan-Dung, Yeung, Sai-Kit, Feng, Yuan, Thien, Nguyen Thanh, Tian, Lixin, Kuan, Sheng-Yao, Ho, Yuan-Hao, Rodriguez, Angel Bueno, Carrillo-Perez, Borja, Klein, Alexander, Alex, Antje, Steiniger, Yannik, Sattler, Felix, Solano-Carrillo, Edgardo, Fabijanić, Matej, Šumunec, Magdalena, Kapetanović, Nadir, Michel, Andreas, Gross, Wolfgang, Weinmann, Martin
The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 addresses maritime computer vision for Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vehicles (USVs). Three challenge categories are considered: (i) UAV-based Maritime Object Tracking with Re-identification, (ii) USV-based Maritime Obstacle Segmentation and Detection, and (iii) USV-based Maritime Boat Tracking. The USV-based Maritime Obstacle Segmentation and Detection category features three sub-challenges, including a new embedded challenge addressing efficient inference on real-world embedded devices. This report offers a comprehensive overview of the findings from the challenges. We provide both statistical and qualitative analyses, evaluating trends from over 195 submissions. All datasets, evaluation code, and the leaderboard are available to the public at https://macvi.org/workshop/macvi24.
EMMA-X: An EM-like Multilingual Pre-training Algorithm for Cross-lingual Representation Learning
Guo, Ping, Wei, Xiangpeng, Hu, Yue, Yang, Baosong, Liu, Dayiheng, Huang, Fei, Xie, Jun
Expressing universal semantics common to all languages is helpful for understanding the meanings of complex and culture-specific sentences. The research theme underlying this scenario focuses on learning universal representations across languages using massive parallel corpora. However, due to the sparsity and scarcity of parallel data, learning authentic ``universals'' for any two languages remains a major challenge. In this paper, we propose EMMA-X, an EM-like Multilingual pre-training Algorithm, to learn (X)Cross-lingual universals with the aid of abundant multilingual non-parallel data. EMMA-X unifies the cross-lingual representation learning task and an extra semantic relation prediction task within an EM framework. Both the extra semantic classifier and the cross-lingual sentence encoder approximate the semantic relation of two sentences and supervise each other until convergence. To evaluate EMMA-X, we conduct experiments on XRETE, a newly introduced benchmark containing 12 widely studied cross-lingual tasks that fully depend on sentence-level representations. Results reveal that EMMA-X achieves state-of-the-art performance. Further geometric analysis of the built representation space against three requirements demonstrates the superiority of EMMA-X over advanced models.
ReIDTrack: Multi-Object Track and Segmentation Without Motion
Huang, Kaer, Sun, Bingchuan, Chen, Feng, Zhang, Tao, Xie, Jun, Li, Jian, Twombly, Christopher Walter, Wang, Zhepeng
In recent years, dominant multi-object tracking (MOT) and segmentation (MOTS) methods have mainly followed the tracking-by-detection paradigm. Transformer-based end-to-end (E2E) solutions bring some new ideas to MOT and MOTS, but they have not achieved state-of-the-art (SOTA) performance on major MOT and MOTS benchmarks. Detection and association are the two main modules of the tracking-by-detection paradigm, and association techniques mainly rely on a combination of motion and appearance information. With recent advances in deep learning, the performance of detection and appearance models has improved rapidly. These trends led us to consider whether SOTA performance can be achieved using only a high-performance detection model and appearance model. Our paper focuses on exploring this direction, using CBNetV2 with Swin-B as the detection model and MoCo-v2 as the self-supervised appearance model; motion information and IoU mapping are removed during association. Our method won 1st place on the MOTS track and 2nd place on the MOT track at the CVPR2023 WAD workshop. We hope our simple and effective method can offer some insights to the MOT and MOTS research community. Source code will be released in this git repository.
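Appearance-only association of the kind described above can be sketched as matching detections to tracks purely by the cosine similarity of their appearance embeddings, with no motion model or IoU term. The greedy matcher and threshold below are illustrative simplifications; the paper's actual association pipeline is not reproduced here.

```python
# Minimal sketch of appearance-only association: greedy one-to-one
# matching of tracks to detections by cosine similarity of embeddings.
import math

def cosine(a, b):
    """Cosine similarity of two embedding vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def associate(track_feats, det_feats, threshold=0.5):
    """Greedily match tracks to detections in order of descending
    appearance similarity; pairs below the threshold stay unmatched."""
    pairs = sorted(
        ((cosine(t, d), ti, di)
         for ti, t in enumerate(track_feats)
         for di, d in enumerate(det_feats)),
        reverse=True)
    used_t, used_d, matches = set(), set(), []
    for sim, ti, di in pairs:
        if sim >= threshold and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [[1.0, 0.0], [0.0, 1.0]]          # stored track embeddings
dets = [[0.1, 0.99], [0.98, 0.05]]          # new detection embeddings
matches = associate(tracks, dets)
```

In practice one would use an optimal assignment solver rather than greedy matching, but the key point stands: with strong embeddings, similarity alone can resolve identities.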