
 Sehanobish, Arijit


Gaining insight into SARS-CoV-2 infection and COVID-19 severity using self-supervised edge features and Graph Neural Networks

arXiv.org Machine Learning

Graph Neural Networks (GNN) have been extensively used to extract meaningful representations from graph structured data and to perform predictive tasks such as node classification and link prediction. In recent years, there has been a lot of work incorporating edge features along with node features for prediction tasks. In this work, we present a framework for creating new edge features, via a combination of self-supervised and …

However, most GNN schemes do not use edge features in learning new representations of graphical data. Recently, edge features have been incorporated into GNNs to harness information describing different aspects of the relationships between nodes [15, 13]. However, there are very few frameworks for creating de novo edge feature vectors in a domain agnostic manner. In this article, using Graph Attention Networks, we propose a self-supervised learning framework to create new edge features which can be used to improve GNN performance in downstream node classification tasks.
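The framework builds on Graph Attention Networks. As a rough illustration of where per-edge quantities arise in a GAT, the sketch below implements one single-head attention layer in NumPy: the attention coefficients alpha[i, j], one per edge, are the kind of learned, domain-agnostic edge feature the abstract refers to. All shapes, names, and the toy graph are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gat_layer(X, A, W, a, slope=0.2):
    """One single-head graph-attention layer (GAT-style), a toy sketch.

    X : (n, f)  node features
    A : (n, n)  adjacency with self-loops (A[i, i] = 1)
    W : (f, fp) learned projection matrix
    a : (2*fp,) learned attention vector, split into source/target halves

    Returns updated node features and the attention matrix alpha,
    whose nonzero entries (one per edge) can be read off as edge features.
    """
    H = X @ W                                   # project node features
    fp = H.shape[1]
    src = H @ a[:fp]                            # per-source contribution
    dst = H @ a[fp:]                            # per-target contribution
    e = src[:, None] + dst[None, :]             # raw logits e_ij
    e = np.where(e > 0, e, slope * e)           # LeakyReLU
    e = np.where(A > 0, e, -np.inf)             # keep only real edges
    e -= e.max(axis=1, keepdims=True)           # numerically stable softmax
    alpha = np.exp(e)
    alpha /= alpha.sum(axis=1, keepdims=True)   # normalize over neighbors
    return alpha @ H, alpha                     # aggregate + edge features

# Toy 5-node graph with self-loops and three undirected edges (assumed).
rng = np.random.default_rng(0)
n, f, fp = 5, 8, 4
A = np.eye(n)
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = A[3, 4] = A[4, 3] = 1.0
X = rng.standard_normal((n, f))
W = rng.standard_normal((f, fp))
a = rng.standard_normal(2 * fp)
H_new, alpha = gat_layer(X, A, W, a)
```

In a setup like the one described, the alpha values (or intermediate per-edge representations computed the same way) would be trained with a self-supervised objective and then fed to a downstream node classifier alongside the node features.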


Using Chinese Glyphs for Named Entity Recognition

arXiv.org Artificial Intelligence

Most Named Entity Recognition (NER) systems use additional features like part-of-speech (POS) tags, shallow parsing, gazetteers, etc. Such information requires external knowledge such as unlabeled texts and trained taggers, and adding these features to NER systems has been shown to have a positive impact. However, creating gazetteers or taggers can take a lot of time and may require extensive data cleaning. In this paper, instead of these traditional features, our Chinese NER systems use lexicographic features of Chinese characters: Chinese characters are composed of graphical components called radicals, and these components often carry semantic indicators. We propose CNN-based models that incorporate this semantic information and use them for NER. Our models improve over the baseline BERT-BiLSTM-CRF model: we set a new baseline score for Chinese OntoNotes v5.0 with an improvement of +0.64 F1, present a state-of-the-art F1 score of 71.81 on the Weibo dataset, and show a competitive improvement of +0.72 over the baseline on the ResumeNER dataset.
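To make the radical-feature idea concrete, here is a minimal NumPy sketch of a character-level CNN over a character's radical sequence: embed each radical, slide a narrow 1-D convolution over the sequence, and max-pool to get one glyph feature vector per character, which would then be concatenated with BERT/BiLSTM features before the CRF. The decomposition, embedding sizes, and filter counts are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def radical_cnn_features(radical_ids, emb, kernels, width=2):
    """Toy CNN over one character's radical-ID sequence.

    radical_ids : (L,)  integer IDs of the character's radicals
    emb         : (n_radicals, d) radical embedding table
    kernels     : (width * d, n_filters) convolution filters

    Returns a (n_filters,) glyph feature vector for the character.
    """
    E = emb[radical_ids]                       # (L, d) radical embeddings
    L, d = E.shape
    # Slide a window of `width` radicals and flatten each window.
    windows = [E[i:i + width].reshape(-1) for i in range(L - width + 1)]
    conv = np.stack(windows) @ kernels         # (L - width + 1, n_filters)
    conv = np.maximum(conv, 0.0)               # ReLU
    return conv.max(axis=0)                    # max-pool over positions

# Illustrative setup: 50 radicals, 4-dim embeddings, 6 filters (assumed).
rng = np.random.default_rng(1)
n_radicals, d, n_filters = 50, 4, 6
emb = rng.standard_normal((n_radicals, d))
kernels = rng.standard_normal((2 * d, n_filters))
# Hypothetical decomposition of one character into two radical IDs.
feat = radical_cnn_features(np.array([3, 7]), emb, kernels)
```

In the full model, a vector like `feat` would be produced for every character in the sentence and concatenated with the contextual token embedding before the BiLSTM-CRF tagging layers.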