Chen, Zhuo
UniHR: Hierarchical Representation Learning for Unified Knowledge Graph Link Prediction
Liu, Zhiqiang, Chen, Mingyang, Hua, Yin, Chen, Zhuo, Liu, Ziqi, Liang, Lei, Chen, Huajun, Zhang, Wen
Beyond-triple fact representations, including hyper-relational facts with auxiliary key-value pairs, temporal facts with additional timestamps, and nested facts implying relationships between facts, are gaining significant attention. However, existing link prediction models are usually designed for one specific type of fact, making it difficult to generalize to other fact representations. To overcome this limitation, we propose a Unified Hierarchical Representation learning framework (UniHR) for unified knowledge graph link prediction. It consists of a unified Hierarchical Data Representation (HiDR) module and a unified Hierarchical Structure Learning (HiSL) module as the graph encoder. The HiDR module unifies hyper-relational KGs, temporal KGs, and nested factual KGs into triple-based representations. HiSL then performs intra-fact and inter-fact message passing, enhancing the semantic information within individual facts and enriching the structural information between facts. Experimental results across 7 datasets from 3 types of KGs demonstrate that UniHR outperforms baselines designed for one specific kind of KG, indicating the strong generalization capability of the HiDR form and the effectiveness of the HiSL module. Code and data are available at https://github.com/Lza12a/UniHR.
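As a loose illustration of what unifying beyond-triple facts into a triple-based form involves, the following Python sketch reifies one hyper-relational fact into plain triples. The fact node and relation names (`has_head`, `qualifier:*`, etc.) are invented for illustration and are not the paper's actual HiDR scheme.

```python
# Hypothetical sketch of reifying a hyper-relational fact into plain triples,
# in the spirit of UniHR's HiDR module. All node/relation names are invented.

def to_triples(head, relation, tail, qualifiers=None, fact_id="fact_0"):
    """Represent one (possibly hyper-relational) fact as a set of triples."""
    triples = [
        (fact_id, "has_head", head),
        (fact_id, "has_relation", relation),
        (fact_id, "has_tail", tail),
    ]
    # Each auxiliary key-value pair becomes one more triple on the fact node.
    for key, value in (qualifiers or {}).items():
        triples.append((fact_id, f"qualifier:{key}", value))
    return triples

print(to_triples("Einstein", "educated_at", "ETH_Zurich",
                 {"degree": "BSc", "end_time": "1900"}))
```

Once every fact type is flattened this way, a single triple-based encoder can be applied uniformly, which is the point of the HiDR form.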
Multimodal Latent Diffusion Model for Complex Sewing Pattern Generation
Liu, Shengqi, Cheng, Yuhao, Chen, Zhuo, Ren, Xingyu, Zhu, Wenhan, Li, Lincheng, Bi, Mengxiao, Yang, Xiaokang, Yan, Yichao
Generating sewing patterns in garment design is receiving increasing attention due to its CG-friendly and flexible-editing nature. Previous sewing pattern generation methods have been able to produce exquisite clothing but struggle to design complex garments with detailed control. To address these issues, we propose SewingLDM, a multi-modal generative model that generates sewing patterns controlled by text prompts, body shapes, and garment sketches. Initially, we extend the original vector representation of sewing patterns into a more comprehensive one that covers more intricate details, and then compress it into a compact latent space. To learn the sewing pattern distribution in the latent space, we design a two-step training strategy to inject the multi-modal conditions, i.e., body shapes, text prompts, and garment sketches, into a diffusion model, ensuring the generated garments are body-suited and detail-controlled. Comprehensive qualitative and quantitative experiments show the effectiveness of our proposed method, which significantly surpasses previous approaches in complex garment design and adaptability to various body shapes. Our project page: https://shengqiliu1.github.io/SewingLDM.
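For intuition about injecting multimodal conditions into a latent diffusion denoiser, here is a toy PyTorch sketch. All module shapes, names, and the simple concatenation scheme are assumptions for illustration, not SewingLDM's actual architecture.

```python
# Illustrative toy denoiser that injects multimodal condition embeddings
# (body shape, text, sketch) into a latent diffusion step. Sizes are invented.
import torch
import torch.nn as nn

class ToyConditionedDenoiser(nn.Module):
    def __init__(self, latent_dim=64, cond_dim=32):
        super().__init__()
        # One projection per condition source: body shape, text, sketch.
        self.body_proj = nn.Linear(10, cond_dim)
        self.text_proj = nn.Linear(77, cond_dim)
        self.sketch_proj = nn.Linear(128, cond_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3 * cond_dim + 1, 256),
            nn.SiLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z_t, t, body, text, sketch):
        cond = torch.cat([self.body_proj(body),
                          self.text_proj(text),
                          self.sketch_proj(sketch)], dim=-1)
        # Predict the noise from the noisy latent, timestep, and conditions.
        return self.net(torch.cat([z_t, cond, t], dim=-1))

model = ToyConditionedDenoiser()
noise_pred = model(torch.randn(2, 64), torch.rand(2, 1),
                   torch.randn(2, 10), torch.randn(2, 77), torch.randn(2, 128))
print(noise_pred.shape)  # torch.Size([2, 64])
```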
Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation
Yang, Xianghui, Shi, Huiwen, Zhang, Bowen, Yang, Fan, Wang, Jiacheng, Zhao, Hongxu, Liu, Xinhai, Wang, Xinzhou, Lin, Qingxiang, Yu, Jiaao, Wang, Lifu, Chen, Zhuo, Liu, Sicong, Liu, Yuhong, Yang, Yong, Wang, Di, Jiang, Jie, Guo, Chunchao
While 3D generative models have greatly improved artists' workflows, existing diffusion models for 3D generation suffer from slow generation and poor generalization. To address these issues, we propose a two-stage approach named Hunyuan3D-1.0, including a lite version and a standard version, both of which support text- and image-conditioned generation. In the first stage, we employ a multi-view diffusion model that efficiently generates multi-view RGB images in approximately 4 seconds. These multi-view images capture rich details of the 3D asset from different viewpoints, relaxing the task from single-view to multi-view reconstruction. In the second stage, we introduce a feed-forward reconstruction model that rapidly and faithfully reconstructs the 3D asset from the generated multi-view images in approximately 7 seconds. The reconstruction network learns to handle the noise and inconsistency introduced by the multi-view diffusion and leverages the available information from the condition image to efficiently recover the 3D structure. Our framework incorporates the text-to-image model Hunyuan-DiT, making it a unified framework that supports both text- and image-conditioned 3D generation. Our standard version has 3x more parameters than our lite version and other existing models. Hunyuan3D-1.0 achieves an impressive balance between speed and quality, significantly reducing generation time while maintaining the quality and diversity of the produced assets.
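The data flow of such a two-stage pipeline can be sketched as below. Both stages are stubs standing in for the multi-view diffusion model and the feed-forward reconstruction model, so this shows only the orchestration, not Hunyuan3D-1.0 itself.

```python
# High-level sketch of a two-stage text/image-to-3D pipeline. The stage
# implementations are placeholders; only the stage ordering is the point.

def multiview_diffusion(condition, n_views=6):
    """Stage 1 (stub): generate multi-view RGB images from a text or image condition."""
    return [f"view_{i}_of({condition})" for i in range(n_views)]

def feedforward_reconstruction(views, condition):
    """Stage 2 (stub): reconstruct a 3D asset from the generated views, reusing
    the condition to resolve noise and inconsistency across views."""
    return {"mesh": "placeholder_mesh", "source_views": views, "condition": condition}

views = multiview_diffusion("a wooden chair")
asset = feedforward_reconstruction(views, "a wooden chair")
print(asset["mesh"], len(asset["source_views"]))  # placeholder_mesh 6
```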
CMATH: Cross-Modality Augmented Transformer with Hierarchical Variational Distillation for Multimodal Emotion Recognition in Conversation
Zhu, Xiaofei, Cheng, Jiawei, Yang, Zhou, Chen, Zhuo, Wang, Qingyang, Yao, Jianfeng
Multimodal emotion recognition in conversation (MER) aims to accurately identify emotions in conversational utterances by integrating multimodal information. Previous methods usually treat multimodal information as being of equal quality and employ symmetric architectures for multimodal fusion. In reality, however, the quality of different modalities usually varies considerably, and a symmetric architecture struggles to accurately recognize conversational emotions when dealing with uneven modal information. Furthermore, fusing multimodal information at a single granularity may fail to adequately integrate modal information, exacerbating inaccuracy in emotion recognition. In this paper, we propose a novel Cross-Modality Augmented Transformer with Hierarchical Variational Distillation, called CMATH, which consists of two major components, i.e., Multimodal Interaction Fusion and Hierarchical Variational Distillation. The former comprises two submodules, Modality Reconstruction and the Cross-Modality Augmented Transformer (CMA-Transformer): Modality Reconstruction focuses on obtaining a high-quality compressed representation of each modality, while the CMA-Transformer adopts an asymmetric fusion strategy that treats one modality as the central modality and the others as auxiliary modalities. The latter first designs a variational fusion network to fuse the fine-grained representations learned by the CMA-Transformer into a coarse-grained representation, and then introduces a hierarchical distillation framework to maintain consistency between modality representations at different granularities. Experiments on the IEMOCAP and MELD datasets demonstrate that our proposed model outperforms previous state-of-the-art baselines. Implementation code is available at https://github.com/cjw-MER/CMATH.
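A minimal PyTorch sketch of the asymmetric fusion idea follows: the central modality queries the auxiliary modalities via cross-attention, with a residual connection keeping the central modality dominant. The dimensions and the single-layer design are assumptions, not CMATH itself.

```python
# Toy asymmetric cross-modal fusion: the central modality attends to the
# auxiliary modalities. Sizes and the single layer are illustrative only.
import torch
import torch.nn as nn

class AsymmetricCrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, central, auxiliary):
        # central:   (batch, len_c, dim), e.g. text utterance features
        # auxiliary: (batch, len_a, dim), e.g. concatenated audio/visual features
        fused, _ = self.attn(query=central, key=auxiliary, value=auxiliary)
        return self.norm(central + fused)  # residual keeps the central modality dominant

fusion = AsymmetricCrossModalFusion()
text = torch.randn(2, 10, 64)
audio_visual = torch.randn(2, 30, 64)
print(fusion(text, audio_visual).shape)  # torch.Size([2, 10, 64])
```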
Exploring Knowledge Boundaries in Large Language Models for Retrieval Judgment
Zhang, Zhen, Wang, Xinyu, Jiang, Yong, Chen, Zhuo, Mu, Feiteng, Hu, Mengting, Xie, Pengjun, Huang, Fei
Large Language Models (LLMs) are increasingly recognized for their practical applications. However, these models often struggle with dynamically changing knowledge as well as unknown static knowledge. Retrieval-Augmented Generation (RAG) tackles these challenges and has shown a significant impact on LLMs. In fact, we find that the impact of RAG on the question answering capabilities of LLMs falls into three categories: beneficial, neutral, and harmful. By minimizing retrieval requests that yield neutral or harmful results, we can effectively reduce both time and computational costs while also improving the overall performance of LLMs. This insight motivates us to differentiate between types of questions using certain metrics as indicators, so as to decrease the retrieval ratio without compromising performance. In our work, we propose a method that identifies different types of questions from this perspective by training a Knowledge Boundary Model (KBM). Experiments conducted on 11 English and Chinese datasets illustrate that the KBM effectively delineates the knowledge boundary, significantly decreasing the proportion of retrievals required for optimal end-to-end performance. Specifically, we evaluate the effectiveness of KBM in three complex scenarios: dynamic knowledge, long-tail static knowledge, and multi-hop problems, as well as its functionality as an external LLM plug-in.
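To make the gating idea concrete, here is a hedged Python sketch of how a knowledge-boundary score could sit in front of a RAG pipeline. The `kbm_score` stub, the threshold, and the `llm`/`retriever` callables are hypothetical placeholders, not the paper's released interface.

```python
# Hedged sketch of retrieval gating with a knowledge-boundary score.
# The scoring heuristic and threshold below are invented for the demo.

RETRIEVE_THRESHOLD = 0.5  # assumed decision boundary

def kbm_score(question: str) -> float:
    """Stub for a trained Knowledge Boundary Model: estimate the probability
    that retrieval would be beneficial rather than neutral or harmful."""
    return 0.8 if "2024" in question else 0.2  # toy heuristic

def answer(question, llm, retriever):
    if kbm_score(question) >= RETRIEVE_THRESHOLD:
        docs = retriever(question)            # retrieval judged beneficial
        return llm(question, context=docs)
    return llm(question, context=None)        # rely on parametric knowledge

llm = lambda q, context=None: f"answer(q={q!r}, used_retrieval={context is not None})"
retriever = lambda q: ["doc_1", "doc_2"]
print(answer("Who won the 2024 championship?", llm, retriever))
print(answer("What is the capital of France?", llm, retriever))
```

Skipping retrieval for questions the model already answers reliably is exactly where the time and cost savings reported above come from.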
MKGL: Mastery of a Three-Word Language
Guo, Lingbing, Bo, Zhongpu, Chen, Zhuo, Zhang, Yichi, Chen, Jiaoyan, Lan, Yarong, Sun, Mengshu, Zhang, Zhiqiang, Luo, Yangyifei, Li, Qian, Zhang, Qiang, Zhang, Wen, Chen, Huajun
Large language models (LLMs) have significantly advanced performance across a spectrum of natural language processing (NLP) tasks. Yet their application to knowledge graphs (KGs), which describe facts in the form of triplets and allow minimal hallucinations, remains an underexplored frontier. In this paper, we investigate the integration of LLMs with KGs by introducing a specialized KG Language (KGL), in which a sentence consists precisely of an entity noun, a relation verb, and another entity noun. Despite KGL's vocabulary being unfamiliar to the LLM, we facilitate its learning through a tailored dictionary and illustrative sentences, and enhance context understanding via real-time KG context retrieval and KGL token embedding augmentation. Our results reveal that LLMs can achieve fluency in KGL, drastically reducing errors compared to conventional KG embedding methods on KG completion. Furthermore, our enhanced LLM shows exceptional competence in generating accurate three-word sentences from an initial entity and in interpreting new, unseen terms from outside the KG.
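A small illustration of the three-word-language framing: each KG triple reads as an entity noun, a relation verb, and another entity noun. The formatting below is illustrative, not the paper's exact tokenization.

```python
# Rendering KG triples as "three-word sentences" in the spirit of KGL.
# The whitespace-joined format here is an assumption for illustration.

triples = [
    ("Alan_Turing", "born_in", "London"),
    ("London", "capital_of", "United_Kingdom"),
]

def to_kgl_sentence(head, relation, tail):
    return f"{head} {relation} {tail}"

for t in triples:
    print(to_kgl_sentence(*t))
# Alan_Turing born_in London
# London capital_of United_Kingdom
```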
Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models
Chen, Zhuo, Liu, Jiawei, Liu, Haotan, Cheng, Qikai, Zhang, Fan, Lu, Wei, Liu, Xiaozhong
Retrieval-Augmented Generation (RAG) is applied to mitigate the hallucination problems and real-time constraints of large language models, but it also introduces vulnerabilities to retrieval corruption attacks. Existing research mainly explores the unreliability of RAG in white-box settings and closed-domain QA tasks. In this paper, we aim to reveal the vulnerabilities of RAG models when faced with black-box attacks for opinion manipulation, and we explore the impact of such attacks on user cognition and decision-making, providing new insights for enhancing the reliability and security of RAG models. We manipulate the ranking results of the retrieval model in RAG with instructions and use these results as data to train a surrogate model. By applying adversarial retrieval attack methods to the surrogate model, black-box transfer attacks on RAG are then realized. Experiments conducted on opinion datasets across multiple topics show that the proposed attack strategy can significantly alter the opinion polarity of the content generated by RAG. This demonstrates the model's vulnerability and, more importantly, reveals the potential negative impact on user cognition and decision-making, making it easier to mislead users into accepting incorrect or biased information.
Temporal Knowledge Graph Question Answering: A Survey
Su, Miao, Li, Zixuan, Chen, Zhuo, Bai, Long, Jin, Xiaolong, Guo, Jiafeng
Knowledge Base Question Answering (KBQA) is a long-standing task of answering questions over knowledge bases. Recently, the evolving dynamics of knowledge have attracted growing interest in Temporal Knowledge Graph Question Answering (TKGQA), an emerging task of answering temporal questions. However, this field grapples with ambiguities in defining temporal questions and lacks a systematic categorization of existing TKGQA methods. In response, this paper provides a thorough survey from two perspectives: the taxonomy of temporal questions and the methodological categorization of TKGQA. Specifically, we first establish a detailed taxonomy of temporal questions engaged in prior studies. Subsequently, we provide a comprehensive review of TKGQA techniques in two categories: semantic parsing-based and TKG embedding-based. Building on this review, the paper outlines potential research directions aimed at advancing the field of TKGQA. This work aims to serve as a comprehensive reference for TKGQA and to stimulate further research.
Improve ROI with Causal Learning and Conformal Prediction
Ai, Meng, Chen, Zhuo, Wang, Jibin, Shang, Jing, Tao, Tao, Li, Zhen
In the commercial sphere, such as operations and maintenance, advertising, and marketing recommendations, intelligent decision-making utilizing data mining and neural network technologies is crucial, especially in resource allocation to optimize ROI. This study delves into the Cost-aware Binary Treatment Assignment Problem (C-BTAP) across different industries, with a focus on the state-of-the-art Direct ROI Prediction (DRP) method. In a wide range of commercial activities, intelligent decision-making based on data mining and neural network technologies is playing an increasingly important role. One crucial aspect of this intelligent decision-making is figuring out how to allocate limited resources in order to maximize returns: in the field of operations and maintenance, how to allocate machine resources and computational power to maximize the revenue of supported businesses [1]; in the advertising sector, how to distribute an advertiser's total budget reasonably to maximize the revenue from their products [2]; and in the realms of recommendation and marketing, how to allocate suitable coupons, discounts, and coins as incentives to users in order to maximize platform user retention, GMV, etc. [3]-[8]. In causal inference, actions such as adjusting the computational power for a specific business operation, modulating the cost of a particular advertisement, and offering incentives of varying value, as mentioned in the above examples, are regarded as treatments. Three popular methods have been proposed for tackling C-BTAP: 1) the Two-Phase Method (TPM), which first uses uplift models, such as meta-learners [11], [12], causal forests [6], [13]-[15], or neural-network-based representation learning approaches [16]-[18], to predict the revenue lift and the cost lift respectively, and then divides the revenue uplift prediction by the cost uplift prediction; combining separate revenue and cost uplift models in this way may enlarge model errors through the mathematical operations during combination; 2) Direct Rank (DR), which creates a loss function aimed at ranking individuals' ROI, as noted in [9]; however, [5] demonstrate that accurate ranking cannot be achieved even when the loss fully converges, because the loss function is not convex, as also detailed in Appendix E of [5]; 3) the Direct ROI Prediction (DRP) method [5], presented at AAAI 2023, which, based on our review of the published literature, remains the state-of-the-art (SOTA) for C-BTAP so far. DRP designs a convex loss function for neural networks to guarantee an unbiased estimation of individuals' ROI when the loss converges.
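To make the error-enlargement point about TPM concrete, here is a toy numeric sketch. The numbers are invented; they only illustrate how modest, opposing errors in the two uplift models compound through the division.

```python
# Worked toy example of the Two-Phase Method (TPM): predict revenue uplift
# and cost uplift separately, then take their ratio as ROI. Invented numbers.

revenue_uplift_true, cost_uplift_true = 10.0, 4.0
roi_true = revenue_uplift_true / cost_uplift_true      # 2.5

# Suppose each model is off by 10%, in opposite directions.
revenue_uplift_pred = revenue_uplift_true * 1.10       # 11.0
cost_uplift_pred = cost_uplift_true * 0.90             # 3.6

roi_pred = revenue_uplift_pred / cost_uplift_pred
print(roi_true, round(roi_pred, 3))  # 2.5 vs ~3.056: over 22% relative error
```

Two 10% component errors become a >22% ROI error after division, which is the motivation the passage gives for predicting ROI directly, as DRP does.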
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts
Chen, Zhuo, Wang, Xinyu, Jiang, Yong, Xie, Pengjun, Huang, Fei, Tu, Kewei
In the era of large language models, techniques such as Retrieval-Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints including model size and computing resources, the context length is often limited, and it becomes challenging to empower the model to cover overlong contexts while answering questions from open domains. This paper proposes a general and convenient method for covering longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder language model that effectively encodes contexts, and applies cross-attention between the encoded contexts and the original inputs. With our method, the original language model can cover several times longer contexts while keeping the computing requirements close to the baseline. Our experiments demonstrate improved performance after fine-tuning across two held-in datasets, four held-out datasets, and two in-context learning settings.
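A minimal sketch of the vectorized-context idea follows: a small encoder compresses retrieved passages into vectors, and the reader cross-attends to them alongside its own input states. The sizes, the linear stand-in for the encoder LM, and the single cross-attention layer are assumptions for illustration, not the paper's implementation.

```python
# Toy version of cross-attending to encoded (vectorized) contexts.
# A Linear layer stands in for the small encoder language model.
import torch
import torch.nn as nn

encoder = nn.Linear(128, 64)                  # stand-in for a small encoder LM
cross_attn = nn.MultiheadAttention(64, 4, batch_first=True)

query_states = torch.randn(1, 16, 64)         # reader states for the question
passages = torch.randn(1, 8 * 32, 128)        # 8 passages x 32 tokens, raw features
context_vectors = encoder(passages)           # compressed context memory

fused, _ = cross_attn(query_states, context_vectors, context_vectors)
print(fused.shape)  # torch.Size([1, 16, 64]): longer contexts, fixed reader cost
```

Because the reader attends to compact context vectors rather than raw tokens, the covered context can grow several-fold while compute stays close to the baseline, matching the claim in the abstract.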