Learning Word Representations from Relational Graphs

Bollegala, Danushka (The University of Liverpool) | Maehara, Takanori (National Institute of Informatics) | Yoshida, Yuichi (National Institute of Informatics) | Kawarabayashi, Ken-ichi (National Institute of Informatics)

AAAI Conferences 

…representations by considering the semantic relations between words. Specifically, given as input a relational graph, a directed labelled weighted graph where vertices represent words and edges represent numerous semantic relations that exist between the corresponding words, we consider the problem of learning a vector representation for each vertex (word) in the graph and a matrix representation for each label type (pattern). The learnt word representations are evaluated for their accuracy by using them to solve semantic word analogy questions on a benchmark dataset.

If we already know a particular concept such as pets, we can describe a new concept such as dogs by stating the semantic relations that the new concept shares with the existing concepts, such as dogs belongs-to pets. Alternatively, we could describe a novel concept by listing all the attributes it shares with existing concepts. In our example, we can describe the concept dog by listing attributes such as mammal, carnivorous, and domestic animal that it shares with another concept such as the cat. Therefore, both attributes and relations can be considered as alternative descriptors of the same knowledge. This close connection between attributes and relations can be seen in knowledge representation schemes such as predicate logic, where attributes …

Our task of learning word attributes using relations between words is challenging for several reasons. First, there can be multiple semantic relations between two words.
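The representation scheme described in the abstract, one vector per word and one matrix per relation-label type, can be sketched with a toy example. The bilinear scoring form, the miniature graph, and the squared-error training objective below are illustrative assumptions for this sketch, not the paper's actual model or objective:

```python
import numpy as np

# Hypothetical toy relational graph: (head, label, tail, weight) tuples.
# The words and labels are illustrative, not from the paper's dataset.
edges = [
    ("dog", "belongs-to", "pets", 1.0),
    ("cat", "belongs-to", "pets", 1.0),
    ("dog", "is-a", "mammal", 1.0),
    ("cat", "is-a", "mammal", 1.0),
]

words = sorted({w for h, _, t, _ in edges for w in (h, t)})
labels = sorted({l for _, l, _, _ in edges})
widx = {w: i for i, w in enumerate(words)}
lidx = {l: i for i, l in enumerate(labels)}

rng = np.random.default_rng(0)
d = 8                                                   # embedding dimensionality
W = rng.normal(scale=0.1, size=(len(words), d))         # one vector per vertex (word)
M = rng.normal(scale=0.1, size=(len(labels), d, d))     # one matrix per label type

def score(h, l, t):
    """Bilinear edge score w_h^T M_l w_t (an assumed scoring form)."""
    return W[widx[h]] @ M[lidx[l]] @ W[widx[t]]

# Fit by gradient descent on the squared error between score and edge weight.
lr = 0.05
for _ in range(1000):
    for h, l, t, y in edges:
        wh, wt, Ml = W[widx[h]], W[widx[t]], M[lidx[l]]
        err = wh @ Ml @ wt - y
        gh = err * (Ml @ wt)          # gradient w.r.t. the head vector
        gt = err * (Ml.T @ wh)        # gradient w.r.t. the tail vector
        gM = err * np.outer(wh, wt)   # gradient w.r.t. the label matrix
        W[widx[h]] -= lr * gh
        W[widx[t]] -= lr * gt
        M[lidx[l]] -= lr * gM
```

After training, `score("dog", "belongs-to", "pets")` approaches the observed edge weight, so the learnt vectors and matrices jointly encode which relations hold between which words.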
