Impact of Lexical Filtering on Overall Opinion Polarity Identification

AAAI Conferences

One approach to assessing the overall opinion polarity (OvOP) of reviews, a concept defined in this paper, is the use of supervised machine learning. In this paper, we observe the impact of lexical filtering applied to reviews on the accuracy of two statistical classifiers (Naive Bayes and Markov Model) for OvOP identification. Two kinds of lexical filters are evaluated: one based on hypernymy as provided by WordNet (Fellbaum 1998), and one handcrafted filter based on part-of-speech (POS) tags. A ranking criterion based on a function of the probability of having positive or negative polarity is introduced and verified as being capable of achieving 100% accuracy at 10% recall. Movie reviews are used to train and evaluate each statistical classifier, which achieves 80% accuracy.
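As a rough illustration of the pipeline this abstract describes, the sketch below runs review text through a hypothetical opinion-word allowlist (a stand-in for the POS-based lexical filter), trains a Naive Bayes classifier, and ranks predictions by class probability so that only high-confidence reviews are kept, trading recall for precision. The data, filter, threshold, and use of scikit-learn are toy assumptions, not the paper's actual setup.

```python
# Minimal sketch: lexical filtering + Naive Bayes polarity with a
# confidence-based ranking criterion. All data and filters are toy
# placeholders; the paper's corpus and filters are not reproduced.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical lexical filter: keep only opinion-bearing words
# (stands in for a real POS tagger plus handcrafted filter).
OPINION_WORDS = {"great", "excellent", "boring", "awful", "fun", "dull"}

def lexical_filter(text):
    return " ".join(w for w in text.lower().split() if w in OPINION_WORDS)

train_texts = [
    "a great and excellent movie , really fun",
    "excellent acting , great fun throughout",
    "boring plot and awful pacing",
    "dull , awful and boring from start to finish",
]
train_labels = ["pos", "pos", "neg", "neg"]

vec = CountVectorizer()
X = vec.fit_transform(lexical_filter(t) for t in train_texts)
clf = MultinomialNB().fit(X, train_labels)

# Ranking criterion: order test reviews by the maximum class
# probability and keep only those above a cutoff, in the spirit of
# the high-precision / low-recall operating point in the abstract.
test_texts = ["great fun", "boring and dull", "an average film"]
probs = clf.predict_proba(vec.transform(lexical_filter(t) for t in test_texts))
ranked = sorted(
    zip(test_texts, probs.max(axis=1), clf.classes_[probs.argmax(axis=1)]),
    key=lambda item: -item[1],
)
for text, conf, label in ranked:
    if conf >= 0.7:  # hypothetical confidence cutoff
        print(f"{label:>3}  {conf:.2f}  {text}")
```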


Persian Wordnet Construction using Supervised Learning

arXiv.org Machine Learning

This paper presents an automated supervised method for Persian wordnet construction. Using a Persian corpus and a bilingual dictionary, initial links between Persian words and Princeton WordNet synsets are generated. These candidate links are then classified as correct or incorrect by a classification system that uses seven features and is trained on a training set containing FarsNet as a set of correct instances; the whole method is thus a classification system. State-of-the-art results are achieved on the automatically derived Persian wordnet: the resulting wordnet includes more than 16,000 words and 22,000 synsets with a precision of 91.18%.
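The sketch below illustrates the link-classification idea in the abstract: candidate (Persian word, WordNet synset) links generated from a bilingual dictionary are judged correct or incorrect by a binary classifier. The two features, the toy dictionary, the transliterated entries, and the logistic-regression classifier are hypothetical stand-ins for the paper's seven features, corpus statistics, and FarsNet-derived labels.

```python
# Minimal sketch of candidate-link classification for wordnet
# construction. Features and data are invented stand-ins.
from sklearn.linear_model import LogisticRegression

# Toy bilingual dictionary: Persian word (transliterated) -> English.
bilingual = {
    "sib": ["apple"],          # hypothetical entries
    "shir": ["lion", "milk"],  # ambiguous entry
}

# Candidate links: (persian_word, synset_id, translation_overlap,
# corpus_cooccurrence) -- two stand-in features instead of seven.
candidates = [
    ("sib",  "apple.n.01", 1.0, 0.8),
    ("shir", "lion.n.01",  0.5, 0.6),
    ("shir", "milk.n.01",  0.5, 0.7),
    ("shir", "shirt.n.01", 0.0, 0.1),
]
# Binary labels playing the role of FarsNet-derived correct instances.
labels = [1, 1, 1, 0]

X = [[overlap, cooc] for _, _, overlap, cooc in candidates]
clf = LogisticRegression().fit(X, labels)

# Score a new candidate link as correct or incorrect.
new_link = ("sib", "pear.n.01", 0.0, 0.2)
prob_correct = clf.predict_proba([[new_link[2], new_link[3]]])[0][1]
print(f"P(correct) for {new_link[:2]} = {prob_correct:.2f}")
```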


Learning Syntactic Patterns for Automatic Hypernym Discovery

Neural Information Processing Systems

Semantic taxonomies such as WordNet provide a rich source of knowledge for natural language processing applications, but are expensive to build, maintain, and extend. Motivated by the problem of automatically constructing and extending such taxonomies, in this paper we present a new algorithm for automatically learning hypernym (is-a) relations from text. Our method generalizes earlier work that had relied on using small numbers of handcrafted regular expression patterns to identify hypernym pairs. Using "dependency path" features extracted from parse trees, we introduce a general-purpose formalization and generalization of these patterns. Given a training set of text containing known hypernym pairs, our algorithm automatically extracts useful dependency paths and applies them to new corpora to identify novel pairs. On our evaluation task (determining whether two nouns in a news article participate in a hypernym relationship), our automatically extracted database of hypernyms attains both higher precision and higher recall than WordNet.
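The sketch below shows the general shape of this approach: each noun pair is represented by the patterns linking its occurrences, and a classifier trained on known hypernym pairs scores unseen pairs. Real dependency paths require a parser; the "path" strings, counts, and pattern names here are hypothetical Hearst-style stand-ins, and the logistic-regression classifier is an assumption rather than the paper's model.

```python
# Minimal sketch of pattern-based hypernym classification.
# Pattern strings are toy stand-ins for dependency paths.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: (noun, noun) pair -> counts of connecting
# patterns observed in a corpus.
pair_paths = {
    ("dog", "animal"):   {"X_such_as_Y": 3, "X_and_other_Y": 1},
    ("oak", "tree"):     {"X_such_as_Y": 2},
    ("dog", "cat"):      {"X_and_Y": 5},
    ("paris", "london"): {"X_and_Y": 4, "X_or_Y": 2},
}
# 1 = known hypernym pair (e.g., from WordNet), 0 = not.
labels = [1, 1, 0, 0]

vec = DictVectorizer()
X = vec.fit_transform(pair_paths.values())
clf = LogisticRegression().fit(X, labels)

# Score an unseen pair by the patterns it appears with in new text.
unseen = {"X_such_as_Y": 1, "X_and_other_Y": 2}
print("P(hypernym):", clf.predict_proba(vec.transform([unseen]))[0][1])
```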


An Automatic Method for Generating Sense Tagged Corpora

AAAI Conferences

The following resources have been used in developing and testing our method. WordNet is a machine-readable dictionary developed at Princeton University by a group led by George Miller (Miller 1995; Fellbaum 1998).


Automated Concept Extraction From Plain Text

AAAI Conferences

We do this by identifying relationships between words in the text based on a lexical database and identifying groups of these words that form closely tied conceptual groups. The word relationships are used to create a directed graph, called a Semantic Relationship Graph (SRG). This SRG is a robust representation of the relationships between word senses that can be used to identify the individual concepts occurring in the text. We demonstrate the usefulness of this technique by creating a classifier based on SRGs that is considerably more accurate than a Naive Bayes text classifier. Mining textual information presents challenges beyond data mining of relational or transaction databases because text lacks any predefined fields, features, or standard formats.
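As a loose illustration of an SRG-style construction, the sketch below treats the candidate noun senses of the input words as nodes and adds a directed edge when one sense reaches another through WordNet hypernymy; connected components then serve as rough conceptual groups. The use of NLTK's WordNet interface and networkx, the restriction to nouns and hypernymy, and the example word list are all assumptions, not the paper's algorithm.

```python
# Minimal SRG-style sketch: senses as nodes, hypernymy links as edges,
# connected components as rough conceptual groups.
# Requires: nltk (with the 'wordnet' corpus downloaded) and networkx.
import networkx as nx
from nltk.corpus import wordnet as wn

def build_srg(words):
    graph = nx.DiGraph()
    senses = {w: wn.synsets(w, pos=wn.NOUN) for w in words}
    all_senses = {s.name() for syns in senses.values() for s in syns}
    for word, syns in senses.items():
        for s in syns:
            graph.add_node(s.name(), word=word)
            # Edge to any candidate sense reachable via hypernymy.
            for ancestor in s.closure(lambda x: x.hypernyms()):
                if ancestor.name() in all_senses:
                    graph.add_edge(s.name(), ancestor.name())
    return graph

if __name__ == "__main__":
    srg = build_srg(["dog", "cat", "animal", "piano", "instrument"])
    for group in nx.weakly_connected_components(srg):
        if len(group) > 1:  # keep only connected conceptual groups
            print(sorted(group))
```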