Kong, Weikun
A Decoupling and Aggregating Framework for Joint Extraction of Entities and Relations
Wang, Yao, Liu, Xin, Kong, Weikun, Yu, Hai-Tao, Racharak, Teeradaj, Kim, Kyoung-Sook, Nguyen, Minh Le
Named Entity Recognition and Relation Extraction are two crucial and challenging subtasks in the field of Information Extraction. Despite the successes achieved by traditional approaches, fundamental research questions remain open. First, most recent studies use parameter sharing for a single subtask or shared features for both subtasks, ignoring their semantic differences. Second, information interaction mainly focuses on the two subtasks, leaving the fine-grained information interaction among the subtask-specific features for encoding subjects, relations, and objects unexplored. Motivated by these limitations, we propose a novel model to jointly extract entities and relations. The main novelties are as follows: (1) We propose to decouple the feature encoding process into three parts, namely encoding subjects, encoding objects, and encoding relations. Thanks to this, we are able to use fine-grained subtask-specific features. The experimental results demonstrate that our model outperforms several previous state-of-the-art models. Extensive additional experiments further confirm the effectiveness of our model.

Introduction

Named Entity Recognition (NER) and Relation Extraction (RE), as two essential subtasks in information extraction, aim to extract entities and relations from semi-structured and unstructured texts. They are used in many downstream applications in different domains, such as knowledge graph construction [38, 39], Question-Answering [36, 37], and knowledge graph-based recommendation systems [40, 41]. Most traditional models, and some methods used in specialized areas [9, 35, 43, 46], construct separate models for NER and RE to extract entities and relations in a pipelined manner. This type of method suffers from error propagation and unilateral information interaction.
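The decoupling idea described above can be illustrated with a minimal sketch: shared token embeddings are projected by three separate, subtask-specific encoders (for subjects, objects, and relations), and the resulting fine-grained features interact through a relation-conditioned bilinear score over token pairs. All names, dimensions, and the single-projection "encoders" here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj(d_in, d_out):
    # Hypothetical subtask-specific projection (a stand-in for a full encoder branch).
    return rng.standard_normal((d_in, d_out)) * 0.02

d_model, d_feat, seq_len = 16, 8, 5
tokens = rng.standard_normal((seq_len, d_model))  # shared contextual embeddings

# Decoupled encoding: separate parameters for subjects, objects, and relations,
# rather than one shared feature space for both subtasks.
h_subj = tokens @ proj(d_model, d_feat)
h_obj = tokens @ proj(d_model, d_feat)
h_rel_weight = proj(d_feat, d_feat)  # relation-specific interaction parameters

# Fine-grained interaction: score every (subject token, object token) pair
# under the relation-conditioned bilinear form.
scores = h_subj @ h_rel_weight @ h_obj.T
print(scores.shape)  # one score per candidate (subject, object) pair
```

The point of the sketch is only the parameterization: each subtask reads the shared embeddings through its own weights, so subject, object, and relation features can diverge where their semantics differ.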
DA-TransUNet: Integrating Spatial and Channel Dual Attention with Transformer U-Net for Medical Image Segmentation
Sun, Guanqun, Pan, Yizhi, Kong, Weikun, Xu, Zichang, Ma, Jianhua, Racharak, Teeradaj, Nguyen, Le-Minh, Xin, Junyi
Accurate medical image segmentation is critical for disease quantification and treatment evaluation. While traditional U-Net architectures and their transformer-integrated variants excel in automated segmentation tasks, they lack the ability to harness the intrinsic positional and channel features of images. Existing models also struggle with parameter efficiency and computational complexity, often due to the extensive use of Transformers. To address these issues, this study proposes a novel deep medical image segmentation framework, called DA-TransUNet, aiming to integrate the Transformer and a dual attention block (DA-Block) into the traditional U-shaped architecture. Unlike earlier transformer-based U-Net models, DA-TransUNet utilizes Transformers and the DA-Block to integrate not only global and local features, but also image-specific positional and channel features, improving the performance of medical image segmentation. By incorporating a DA-Block at the embedding layer and within each skip connection layer, we substantially enhance feature extraction capabilities and improve the efficiency of the encoder-decoder structure. DA-TransUNet demonstrates superior performance in medical image segmentation tasks, consistently outperforming state-of-the-art techniques across multiple datasets. In summary, DA-TransUNet offers a significant advancement in medical image segmentation, providing an effective and powerful alternative to existing techniques. Our architecture stands out for its ability to improve segmentation accuracy, thereby advancing the field of automated medical image diagnostics. The codes and parameters of our model will be publicly available at https://github.com/SUN-1024/DA-TransUnet.
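The dual attention described above can be sketched in a few lines: a channel-attention step that reweights feature channels from their spatially pooled activations, followed by a spatial (positional) attention step that reweights positions from their cross-channel pooled activations. This is a simplified, hypothetical sketch of the general dual-attention pattern, not the exact DA-Block from the paper; the pooling-plus-sigmoid gating used here is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def da_block(feat):
    """Sketch of dual attention on a (channels, height, width) feature map:
    channel attention first, then spatial attention on the reweighted map."""
    # Channel attention: gate each channel by its spatially pooled response.
    ch_gate = sigmoid(feat.mean(axis=(1, 2)))        # shape (c,)
    feat = feat * ch_gate[:, None, None]
    # Spatial attention: gate each position by its cross-channel pooled response.
    sp_gate = sigmoid(feat.mean(axis=0))             # shape (h, w)
    return feat * sp_gate[None, :, :]

x = np.random.default_rng(0).standard_normal((4, 8, 8))
y = da_block(x)
print(y.shape)  # same shape as the input feature map
```

Because the block preserves the feature-map shape, it can be dropped in at the embedding layer or inside skip connections, which is where the abstract says DA-TransUNet places it.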