Baghershahi, Peyman
Temporal Relation Extraction in Clinical Texts: A Span-based Graph Transformer Approach
Chaturvedi, Rochana, Baghershahi, Peyman, Medya, Sourav, Di Eugenio, Barbara
Temporal information extraction from unstructured text is essential for contextualizing events and deriving actionable insights, particularly in the medical domain. We address the task of extracting clinical events and their temporal relations using the well-studied I2B2 2012 Temporal Relations Challenge corpus. This task is inherently challenging due to complex clinical language, long documents, and sparse annotations. We introduce GRAPHTREX, a novel method integrating span-based entity-relation extraction, clinical large pre-trained language models (LPLMs), and Heterogeneous Graph Transformers (HGT) to capture local and global dependencies. Our HGT component facilitates information propagation across the document through innovative global landmarks that bridge distant entities. Our method advances the state of the art, improving the TempEval $F_1$ score by 5.5% over the previous best and by up to 8.9% on long-range relations, which pose a particular challenge. This work not only advances temporal information extraction but also lays the groundwork for improved diagnostic and prognostic models through enhanced temporal reasoning.
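To make the landmark idea concrete, the following is a minimal, hypothetical sketch of a heterogeneous document graph in which every entity node is wired to a small set of global landmark nodes, processed with PyTorch Geometric's HGTConv. The node types, counts, random features, and edge layout are illustrative assumptions, not the GRAPHTREX implementation; in practice the node features would come from a clinical LPLM span encoder.

```python
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HGTConv

# Toy heterogeneous document graph: clinical event spans, temporal expressions,
# and a few global "landmark" nodes. Features are random placeholders here.
hidden = 64
data = HeteroData()
data['event'].x = torch.randn(10, hidden)    # 10 event spans
data['timex'].x = torch.randn(4, hidden)     # 4 temporal expressions
data['landmark'].x = torch.randn(2, hidden)  # 2 global landmark nodes

# Local edges between nearby entities (indices are illustrative).
data['event', 'near', 'timex'].edge_index = torch.tensor([[0, 1, 2], [0, 0, 1]])

# Every event is linked to every landmark (and back), so any two entities are
# at most two hops apart, bridging long-range dependencies across the document.
ev2lm = torch.cartesian_prod(torch.arange(10), torch.arange(2)).t()
data['event', 'to_landmark', 'landmark'].edge_index = ev2lm
data['landmark', 'to_event', 'event'].edge_index = ev2lm.flip(0)

conv = HGTConv(hidden, hidden, metadata=data.metadata(), heads=4)
out = conv(data.x_dict, data.edge_index_dict)  # updated embeddings per node type
print({k: v.shape for k, v in out.items()})
```

Because every entity connects to every landmark, information can propagate between distant spans in two hops, which is the intuition behind using landmarks to bridge long-range relations.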
Design Requirements for Human-Centered Graph Neural Network Explanations
Habibi, Pantea, Baghershahi, Peyman, Medya, Sourav, Chattopadhyay, Debaleena
Graph neural networks (GNNs) are powerful graph-based machine-learning models that are popular in various domains, e.g., social media, transportation, and drug discovery. However, owing to their complex data representations, GNNs do not easily lend themselves to human-intelligible explanations of their predictions, which can decrease trust in them and deter collaboration between AI experts and non-technical domain experts. Here, we first discuss two papers that aim to provide GNN explanations to domain experts in an accessible manner and then establish a set of design requirements for human-centered GNN explanations. Finally, we offer two example prototypes to demonstrate some of those proposed requirements.
Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition
Baghershahi, Peyman, Hosseini, Reshad, Moradi, Hadi
Many Graph Neural Networks (GNNs) have been proposed for Knowledge Graph Embedding (KGE). However, many of these methods neglect relation information or combine it with entity information inefficiently, leading to low expressiveness. To address this issue, we introduce a general knowledge graph encoder that incorporates tensor decomposition into the aggregation function of the Relational Graph Convolutional Network (R-GCN). In our model, neighboring entities are transformed by projection matrices drawn from a low-rank tensor and indexed by relation type, which benefits from multi-task learning across relations and produces expressive relation-aware representations. In addition, we propose a low-rank estimation of the core tensor using CP decomposition, which compresses and regularizes the model. We use a training method inspired by contrastive learning, which relieves the training limitation of the 1-N scoring method on huge graphs. We achieve favorably competitive results on FB15k-237 and WN18RR with embeddings of considerably lower dimension.
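As a rough illustration of the relation-aware projection idea, the sketch below parameterizes per-relation projection matrices through a rank-R CP factorization, so the full (relations x d_in x d_out) tensor is never materialized. The class name, initialization, and shapes are assumptions made for illustration, not the paper's code; the R-GCN-style neighborhood aggregation and the contrastive training loop are omitted.

```python
import torch
import torch.nn as nn

class CPRelationProjection(nn.Module):
    """Relation-specific projections from a CP-decomposed tensor (hypothetical sketch).

    The per-relation matrix is W[k] = B @ diag(A[k]) @ C.T, i.e.
    W[k, i, j] = sum_r A[k, r] * B[i, r] * C[j, r], with rank-R factors
    shared across all relations for compression and regularization.
    """
    def __init__(self, num_relations: int, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(num_relations, rank) * 0.1)  # relation factors
        self.B = nn.Parameter(torch.randn(d_in, rank) * 0.1)           # input-dim factors
        self.C = nn.Parameter(torch.randn(d_out, rank) * 0.1)          # output-dim factors

    def forward(self, h_neighbors: torch.Tensor, rel_types: torch.Tensor) -> torch.Tensor:
        # h_neighbors: (num_edges, d_in); rel_types: (num_edges,) long tensor of relation ids.
        # Apply each neighbor's relation-specific matrix without materializing it:
        z = h_neighbors @ self.B      # shared input factors: (num_edges, rank)
        z = z * self.A[rel_types]     # relation-specific scaling of the shared factors
        return z @ self.C.t()         # map back to output dimension: (num_edges, d_out)

# Example: project 5 neighbor embeddings under 3 relation types.
proj = CPRelationProjection(num_relations=3, d_in=32, d_out=32, rank=8)
h = torch.randn(5, 32)
rel = torch.tensor([0, 2, 1, 0, 2])
print(proj(h, rel).shape)  # torch.Size([5, 32])
```

Messages produced this way can then be averaged per destination node as in R-GCN, with the shared low-rank factors acting both as a compression of the per-relation parameters and as a form of multi-task regularization across relations.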