Learning Discrete Structures for Graph Neural Networks

Franceschi, Luca, Niepert, Mathias, Pontil, Massimiliano, He, Xiao

arXiv.org Machine Learning 

Relational learning is concerned with methods that can not only leverage the attributes of data points but also their relationships. Diagnosing a patient, for example, depends not only on the patient's vitals and demographic information but also on the same information about their relatives, the information about the hospitals they have visited, and so on. Relational learning, therefore, does not make the assumption of independence between data points but models their dependency explicitly. Graphs are a natural way to represent relational information, and there are many machine learning algorithms that leverage graph structure. Graph neural networks (GNNs) (Scarselli et al., 2009) are one such class of algorithms, able to incorporate sparse and discrete dependency structures between data points. While a graph structure is available in some domains, in others it has to be inferred or constructed. A possible approach is to first create a k-nearest neighbor (kNN) graph based on some measure of similarity between data points. This is a common strategy used by several learning methods such as LLE (Roweis & Saul, 2000) and Isomap (Tenenbaum et al., 2000).
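The kNN-graph construction mentioned above can be sketched in a few lines of NumPy. This is a generic illustration of the strategy, not the paper's own method; the helper name `knn_graph` and the choice of Euclidean distance as the similarity measure are assumptions for the example.

```python
import numpy as np

def knn_graph(X, k):
    """Build a symmetric k-nearest-neighbor adjacency matrix.

    X: (n, d) array of data-point attributes; k: number of neighbors.
    Euclidean distance stands in for the generic similarity measure.
    """
    # Pairwise squared Euclidean distances between all points.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbor
    # Indices of the k closest points for each row.
    idx = np.argsort(d2, axis=1)[:, :k]
    A = np.zeros_like(d2)
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, idx.ravel()] = 1.0
    # Symmetrize: keep an edge if either endpoint selected the other.
    return np.maximum(A, A.T)
```

The resulting 0/1 adjacency matrix can serve as the input graph for a GNN when no graph is given; methods like LLE and Isomap apply the same neighborhood construction before their respective embedding steps.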
