Meta-GCN: A Dynamically Weighted Loss Minimization Method for Dealing with the Data Imbalance in Graph Neural Networks

Mohammadizadeh, Mahdi, Mozhdehi, Arash, Ioannou, Yani, Wang, Xin

arXiv.org Artificial Intelligence 

Graph structures effectively capture the complex relationships between objects, i.e., nodes, through edges. Moreover, graph-based representation is an effective method for feature dimensionality reduction [4, 5]. GNNs, as powerful tools for representation learning on graph-structured data, have attracted increasing attention in recent years. GNNs enable effective deep representation learning for graph analysis tasks such as node classification, link prediction, and clustering in both Euclidean and non-Euclidean domains [6]. Among the proposed methods for learning representations on graphs, the GCN proposed by Kipf and Welling [7] has proved to be a simple and effective GNN model. It learns hidden representations that encode both node features and local graph structure while scaling linearly in the number of edges of the given graph. However, most classification algorithms in GNNs minimize the average loss over all training examples, which produces reasonable outcomes only for class-balanced datasets.
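To make the imbalance problem concrete, the sketch below contrasts the standard uniformly averaged negative log-likelihood with a per-class weighted variant over a toy set of labeled nodes. This is an illustrative sketch only: the hand-picked class weights stand in for any reweighting scheme and are not Meta-GCN's dynamically learned weights, and the `weighted_nll` helper is an assumed name, not an API from the paper.

```python
import math

def weighted_nll(log_probs, labels, class_weights=None):
    """Weighted negative log-likelihood over labeled nodes.

    With class_weights=None this reduces to the plain average loss
    that standard GCN training minimizes; supplying per-class weights
    (e.g. larger weights for rare classes) counteracts class imbalance.
    Illustrative sketch only -- not Meta-GCN's meta-learned weighting.
    """
    if class_weights is None:
        class_weights = {c: 1.0 for c in set(labels)}
    total = sum(class_weights[y] * -lp[y] for lp, y in zip(log_probs, labels))
    norm = sum(class_weights[y] for y in labels)
    return total / norm

# Toy example: 3 labeled nodes, 2 classes, class 1 is the rare class
# and is also the one the model predicts least confidently.
log_probs = [
    {0: math.log(0.9), 1: math.log(0.1)},  # true class 0
    {0: math.log(0.8), 1: math.log(0.2)},  # true class 0
    {0: math.log(0.3), 1: math.log(0.7)},  # true class 1 (rare)
]
labels = [0, 0, 1]

plain = weighted_nll(log_probs, labels)
# Up-weighting the rare class makes its error dominate the objective.
balanced = weighted_nll(log_probs, labels, class_weights={0: 0.5, 1: 1.5})
```

Under the uniform average, the majority class dominates the gradient signal; the weighted version shifts the objective toward the rare class, which is the failure mode a dynamically weighted loss is designed to address.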
