Collaborating Authors

Fang, Li


Can Text-based Knowledge Graph Completion Benefit From Zero-Shot Large Language Models?

arXiv.org Artificial Intelligence

Text-based knowledge graph completion (KGC) methods, which leverage textual entity descriptions, are at the research forefront. The efficacy of these models hinges on the quality of the textual data. This study explores whether enriched or more efficient textual descriptions can amplify model performance. Recently, Large Language Models (LLMs) have shown remarkable improvements on NLP tasks, attributed to their sophisticated text generation and conversational capabilities. LLMs assimilate linguistic patterns and integrate knowledge from their training data. Compared to traditional databases such as Wikipedia, LLMs provide several advantages, facilitating broader information querying and content augmentation. We hypothesize that LLMs, without fine-tuning, can refine entity descriptions, serving as an auxiliary knowledge source. An in-depth analysis was conducted to verify this hypothesis. We found that (1) without fine-tuning, LLMs can further improve the quality of entity text descriptions, which we validated through experiments on the FB15K-237 and WN18RR datasets; (2) LLMs exhibit hallucination during text generation and sometimes output polysemous words, which we mitigated by contextualizing prompts to constrain LLM outputs; and (3) larger model sizes do not necessarily guarantee better performance, as even a 7B model can achieve strong results on this comparative task. These findings underscore the untapped potential of large models in text-based KGC, a promising direction for further research. The code and datasets are accessible at https://github.com/sjlmg/CP-KGC.
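The prompt-contextualization step described in finding (2) can be illustrated with a short sketch. The following is a minimal example assuming an OpenAI-style chat-completion client; the prompt template, model name, and example triples are illustrative assumptions, not the authors' exact setup.

# Illustrative sketch: refining an entity description with a zero-shot LLM,
# constraining the output by embedding known triples in the prompt.
# The client, model name, and template are assumptions, not the paper's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def refine_description(entity: str, description: str, triples: list[str]) -> str:
    """Rewrite an entity description grounded in known triples to curb
    hallucination and resolve polysemous entity names."""
    prompt = (
        f"Entity: {entity}\n"
        f"Known facts: {'; '.join(triples)}\n"
        f"Current description: {description}\n"
        "Rewrite the description in one concise sentence. "
        "Use only the facts given above; do not add new information."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output keeps descriptions stable
    )
    return response.choices[0].message.content.strip()

print(refine_description(
    "Apple",
    "Apple is a technology company.",
    ["(Apple, founded_by, Steve Jobs)", "(Apple, industry, consumer electronics)"],
))

Grounding the prompt in known triples is what disambiguates polysemous entity names (e.g., "Apple" the company versus the fruit) and discourages the model from inventing new facts.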


Shape-programmable Adaptive Multi-material Microrobots for Biomedical Applications

arXiv.org Artificial Intelligence

Flagellated microorganisms can swim at low Reynolds numbers and adapt to changes in their environment. Specifically, their flagella can switch shapes or modes through gene expression. In the past decade, efforts have focused on fabricating and investigating rigid microrobots that cannot adapt to their environment. More recently, adaptive microrobots that mimic real microorganisms have attracted increasing attention. However, even though some hydrogel-based adaptive microrobots have emerged, the swimming behaviors of these microrobots before and after environment-induced deformations have not been predicted in a systematic, standardized way. In this work, experiments, finite element analysis, and dynamic modeling are presented together to realize a complete understanding of these adaptive microrobots. The three parts cross-verify one another, demonstrating the validity of the approach and enabling bio-applications with shape-programmable, and even swimming-performance-programmable, microrobots. Moreover, an application of targeted object delivery using the proposed microrobot is successfully demonstrated. Finally, cytotoxicity tests confirm the potential of the proposed microrobot for biomedical applications.

One-Sentence Summary: A systematic approach to design shape-programmable, dual-function, adaptive microrobots for biomedical applications.

Introduction: Microorganisms are capable of swimming with flagella to provide motility (1-3). These microorganisms can adapt their flagella into different shapes or modes by altering gene expression to accommodate environmental changes or for other purposes such as nutrition, hosting, and invasion (4). For example, the flagellum of an Echinus esculentus spermatozoon transitions from a planar to a helical shape when the viscosity is increased, and back to a quasi-planar shape when it is increased further (5). Moreover, recent investigations show that flagella can deform to wrap around the cell body to escape from traps or to enhance the efficiency of environmental exploration (6, 7). Inspired by these natural living beings, many microrobots have been fabricated to swim in the microscale world. The two most widely adopted strategies for achieving motility are helical structures mimicking the flagella of the bacterium E. coli and flexible bodies replicating the motion of spermatozoa (8). In the last decade, various helical-type microrobots have been realized with fixed shapes, i.e., the structure does not change once it is fabricated (9-11).
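The dynamic modeling mentioned in this abstract is not reproduced in this excerpt. As background, low-Reynolds-number helical swimmers are commonly described by a linear resistance-matrix relation between axial force/torque and velocity/rotation rate; the formulation below is the standard textbook model, shown only to make the shape-to-speed coupling concrete, and is not necessarily the authors' exact model.

% Standard propulsion model for a helical swimmer at low Reynolds number:
% axial force F and torque T are linear in axial speed v and rotation rate \omega;
% the coefficients a, b, c depend on helix geometry and fluid viscosity.
\begin{pmatrix} F \\ T \end{pmatrix}
=
\begin{pmatrix} a & b \\ b & c \end{pmatrix}
\begin{pmatrix} v \\ \omega \end{pmatrix}
% Free swimming (F = 0) yields v = -(b/a)\,\omega: any environment-induced
% shape change that alters a or b directly reprograms the swimming speed.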


Deciphering the Language of Nature: A transformer-based language model for deleterious mutations in proteins

arXiv.org Artificial Intelligence

Various machine-learning models, including deep neural network models, have already been developed to predict the deleteriousness of missense (non-synonymous) mutations. Potential improvements to the current state of the art, however, may still benefit from a fresh look at the biological problem using more sophisticated self-adaptive machine-learning approaches. Recent advances in the natural language processing field show transformer models, a type of deep neural network, to be particularly powerful at modeling sequence information with context dependence. In this study, we introduce MutFormer, a transformer-based model for the prediction of deleterious missense mutations, which uses reference and mutated protein sequences from the human genome as its primary features. MutFormer takes advantage of a combination of self-attention layers and convolutional layers to learn both long-range and short-range dependencies between amino acid mutations in a protein sequence. We first pre-trained MutFormer on reference protein sequences and on mutated protein sequences resulting from common genetic variants observed in human populations. We next examined different fine-tuning methods to successfully apply the model to deleteriousness prediction of missense mutations. Finally, we evaluated MutFormer's performance on multiple test datasets. We found that MutFormer showed similar or improved performance over a variety of existing tools, including those that use conventional machine-learning approaches. We conclude that MutFormer successfully considers sequence features that were not explored in previous studies and could potentially complement existing computational predictions or empirically generated functional scores to improve our understanding of disease variants.
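The combination of self-attention (long-range) and convolution (short-range) described above can be sketched generically. The PyTorch block below illustrates that general pattern; the layer sizes, kernel width, and the way the two branches are merged are assumptions, not MutFormer's actual architecture.

# Illustrative hybrid block: self-attention for long-range context,
# 1D convolution for local (short-range) amino-acid neighborhoods.
# Dimensions and merge strategy are assumptions, not MutFormer's design.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8, kernel: int = 7):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv = nn.Conv1d(d_model, d_model, kernel, padding=kernel // 2)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), e.g., embedded protein sequences
        attn_out, _ = self.attn(x, x, x)          # long-range dependencies
        x = self.norm1(x + attn_out)              # residual + norm
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local motifs
        return self.norm2(x + conv_out)           # residual + norm

# Example: a batch of 2 sequences, 128 residues, 256-dim embeddings.
block = HybridBlock()
print(block(torch.randn(2, 128, 256)).shape)  # torch.Size([2, 128, 256])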


Bioformer: an efficient transformer language model for biomedical text mining

arXiv.org Artificial Intelligence

Pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) have achieved state-of-the-art performance in natural language processing (NLP) tasks. Recently, BERT has been adapted to the biomedical domain. Despite their effectiveness, these models have hundreds of millions of parameters and are computationally expensive when applied to large-scale NLP applications. We hypothesized that the number of parameters of the original BERT can be dramatically reduced with minor impact on performance. In this study, we present Bioformer, a compact BERT model for biomedical text mining. We pretrained two Bioformer models (named Bioformer8L and Bioformer16L), which reduce the model size by 60% compared to BERTBase. Bioformer uses a biomedical vocabulary and was pre-trained from scratch on PubMed abstracts and PubMed Central full-text articles. We thoroughly evaluated the performance of Bioformer as well as existing biomedical BERT models, including BioBERT and PubMedBERT, on 15 benchmark datasets covering four biomedical NLP tasks: named entity recognition, relation extraction, question answering, and document classification. The results show that, with 60% fewer parameters, Bioformer16L is only 0.1% less accurate than PubMedBERT, while Bioformer8L is 0.9% less accurate than PubMedBERT. Both Bioformer16L and Bioformer8L outperformed BioBERTBase-v1.1. In addition, Bioformer16L and Bioformer8L are 2-3 times as fast as PubMedBERT/BioBERTBase-v1.1. Bioformer has been successfully deployed to PubTator Central, providing gene annotations for over 35 million PubMed abstracts and 5 million PubMed Central full-text articles. We make Bioformer publicly available via https://github.com/WGLab/bioformer, including pre-trained models, datasets, and instructions for downstream use.
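For downstream use, a compact BERT-style checkpoint like this is typically loaded through the Hugging Face transformers API. The sketch below shows that generic pattern; the model identifier is a placeholder, and the actual checkpoints and instructions live in the GitHub repository linked above.

# Minimal sketch: loading a compact biomedical BERT for feature extraction.
# "some-org/bioformer-8L" is a placeholder model id; consult
# https://github.com/WGLab/bioformer for the real checkpoints.
from transformers import AutoModel, AutoTokenizer

model_id = "some-org/bioformer-8L"  # placeholder, not a verified hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "BRCA1 mutations are associated with breast cancer."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# Contextual embeddings for downstream tasks such as NER or relation extraction:
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)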


Deep Clustering With Intra-class Distance Constraint for Hyperspectral Images

arXiv.org Machine Learning

The high dimensionality of hyperspectral images often degrades clustering performance. Owing to the powerful deep feature extraction and non-linear feature representation abilities of deep learning, clustering algorithms based on deep learning have become a hot research topic in the field of hyperspectral remote sensing. However, most deep clustering algorithms for hyperspectral images use deep neural networks as feature extractors without considering prior-knowledge constraints suited to clustering. To solve this problem, we propose an intra-class distance constrained deep clustering algorithm for high-dimensional hyperspectral images. The proposed algorithm constrains the feature mapping of the auto-encoder network with an intra-class distance term, so that raw images are transformed from the original high-dimensional space to a low-dimensional feature space that is more conducive to clustering. Furthermore, the learning process is treated as a joint optimization problem over deep feature extraction and clustering. Experimental results demonstrate that the proposed algorithm is strongly competitive with state-of-the-art clustering methods for hyperspectral images.
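The joint objective described above, reconstruction plus an intra-class distance constraint in the latent space, can be sketched as follows. Layer sizes, the penalty weight, and the use of fixed cluster labels in the toy step are illustrative assumptions, not the paper's exact formulation.

# Illustrative joint loss: autoencoder reconstruction + intra-cluster
# distance penalty in latent space. Sizes and weights are assumptions.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim: int = 200, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def intra_class_distance(z: torch.Tensor, assign: torch.Tensor, k: int) -> torch.Tensor:
    """Mean squared distance of latent points to their cluster centroids."""
    loss = z.new_zeros(())
    for c in range(k):
        members = z[assign == c]
        if len(members) > 1:
            loss = loss + ((members - members.mean(0)) ** 2).sum(1).mean()
    return loss / k

# One joint-optimization step on a toy batch of spectra (200 bands, k = 5).
x = torch.randn(32, 200)
model, mse = AE(), nn.MSELoss()
z, x_hat = model(x)
assign = torch.randint(0, 5, (32,))  # stand-in for current cluster labels
loss = mse(x_hat, x) + 0.1 * intra_class_distance(z, assign, k=5)
loss.backward()

In a full pipeline the cluster assignments would come from a clustering step (e.g., k-means on the latent codes) that alternates with the gradient updates, which is what makes the procedure a joint optimization rather than a fixed two-stage pipeline.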