Cambria, Erik (Nanyang Technological University) | Poria, Soujanya (Nanyang Technological University) | Hazarika, Devamanyu (National University of Singapore) | Kwok, Kenneth (Institute of High Performance Computing, A*STAR)
With the recent development of deep learning, research in AI has gained new vigor and prominence. While machine learning has succeeded in revitalizing many research fields, such as computer vision, speech recognition, and medical diagnosis, we have yet to witness comparable progress in natural language understanding. One of the reasons behind this unmet expectation is that, while a bottom-up approach is feasible for pattern recognition, reasoning and understanding often require a top-down approach. In this work, we couple sub-symbolic and symbolic AI to automatically discover conceptual primitives from text and link them to commonsense concepts and named entities in a new three-level knowledge representation for sentiment analysis. In particular, we employ recurrent neural networks to infer primitives by lexical substitution and use them for grounding common and commonsense knowledge by means of multi-dimensional scaling.
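The grounding step above relies on multi-dimensional scaling (MDS), which embeds items into a low-dimensional space so that pairwise distances are approximately preserved. Below is a minimal sketch of classical MDS in NumPy; the concept names and dissimilarity values are hypothetical, and the paper's actual knowledge representation differs.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed points into k dims from a pairwise-distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical dissimilarities between four commonsense concepts
# (symmetric, zero diagonal) -- illustrative data only.
D = np.array([
    [0.0, 0.9, 0.2, 0.8],
    [0.9, 0.0, 0.8, 0.3],
    [0.2, 0.8, 0.0, 0.7],
    [0.8, 0.3, 0.7, 0.0],
])
coords = classical_mds(D)  # one 2-D point per concept
```

Concepts with small dissimilarity (e.g. the first and third) end up near each other in the embedded space, which is what makes clustering by semantic relatedness possible.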
This paper introduces a new method to classify sentiment polarity for aspects in product reviews, which we call bitmask bidirectional long short-term memory networks. It is based on long short-term memory (LSTM) networks, a widely used model in natural language processing. Our proposed method uses a bitmask layer to keep attention on aspects. We evaluate it on restaurant and laptop reviews from three popular shared tasks: SemEval-2014 Task 4, SemEval-2015 Task 12, and SemEval-2016 Task 5, and it achieves results competitive with state-of-the-art LSTM-based methods. Furthermore, we demonstrate the benefit of using domain-specific sentiment lexicons and word embeddings in aspect-based sentiment analysis.
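The bitmask idea can be illustrated with a small sketch: a binary mask marks which token positions belong to the aspect, and only those hidden states contribute to the aspect representation. Everything here is hypothetical (shapes, positions, random states) and stands in for real BiLSTM outputs.

```python
import numpy as np

# Hypothetical BiLSTM hidden states for a 6-token review, hidden size 4.
seq_len, hidden = 6, 4
rng = np.random.default_rng(0)
H = rng.normal(size=(seq_len, hidden))

# Bitmask marking aspect tokens, e.g. "battery life" at positions 2-3.
mask = np.array([0, 0, 1, 1, 0, 0], dtype=float)

# Zero out non-aspect states, then mean-pool over the aspect span.
aspect_repr = (H * mask[:, None]).sum(axis=0) / mask.sum()
```

The pooled vector `aspect_repr` would then feed a polarity classifier; the mask keeps the model's attention fixed on the aspect regardless of sentence length.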
Aspect-Based Sentiment Analysis (ABSA) deals with the extraction of sentiments and their targets. Collecting labeled data for this task, in order to help neural networks generalize better, can be laborious and time-consuming. As an alternative, data similar to real-world examples can be produced artificially through an adversarial process carried out in the embedding space. Although these examples are not real sentences, they have been shown to act as a regularizer that makes neural networks more robust. In this work, we apply adversarial training, which was put forward by Goodfellow et al. (2014), to the post-trained BERT (BERT-PT) language model proposed by Xu et al. (2019) on the two major ABSA tasks of Aspect Extraction and Aspect Sentiment Classification. After improving the results of post-trained BERT through an ablation study, we propose a novel architecture called BERT Adversarial Training (BAT) to utilize adversarial training in ABSA. The proposed model outperforms post-trained BERT on both tasks. To the best of our knowledge, this is the first study on the application of adversarial training in ABSA.
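The adversarial process described above boils down to perturbing input embeddings along the gradient of the loss, bounded by a small budget. A hedged sketch of that single step follows; the function name and gradient values are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def adversarial_perturbation(grad, eps=1.0):
    """Worst-case perturbation under an L2 ball: r_adv = eps * g / ||g||."""
    norm = np.linalg.norm(grad)
    return eps * grad / (norm + 1e-12)  # small constant avoids divide-by-zero

# Hypothetical gradient of the loss w.r.t. one token embedding.
g = np.array([3.0, 4.0])
r = adversarial_perturbation(g, eps=1.0)
# Training then uses both the clean embedding and embedding + r,
# so the adversarial examples act as a regularizer.
```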
Predicting the affective valence of unknown multi-word expressions is key for concept-level sentiment analysis. AffectiveSpace 2 is a vector space model, built by means of random projection, that allows for reasoning by analogy on natural language concepts. By reducing the dimensionality of affective common-sense knowledge, the model allows semantic features associated with concepts to be generalized and, hence, allows concepts to be intuitively clustered according to their semantic and affective relatedness. Such an affective intuition (so called because it does not rely on explicit features, but rather on implicit analogies) enables the inference of emotions and polarity conveyed by multi-word expressions, thus achieving efficient concept-level sentiment analysis.
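Random projection, the dimensionality-reduction technique named above, multiplies the data by a random matrix whose scaling approximately preserves pairwise distances. The sketch below uses a hypothetical dense concept-by-feature matrix; AffectiveSpace's actual knowledge matrix is much larger and sparser.

```python
import numpy as np

rng = np.random.default_rng(42)
n_concepts, n_features, k = 100, 1000, 50

# Hypothetical concept-by-feature matrix (sparse in practice).
X = rng.normal(size=(n_concepts, n_features))

# Gaussian random projection matrix, scaled by 1/sqrt(k) so that
# pairwise distances are approximately preserved (Johnson-Lindenstrauss).
R = rng.normal(size=(n_features, k)) / np.sqrt(k)

X_low = X @ R  # each concept is now a k-dimensional affective vector
```

Because no training is needed to build `R`, the reduction is cheap even for very large knowledge bases, which is part of the appeal over eigendecomposition-based methods.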
Web 2.0 has changed the ways people communicate, collaborate, and express their opinions and sentiments. Yet, although social data on the Web are perfectly suitable for human consumption, they remain hardly accessible to machines. To bridge the cognitive and affective gap between word-level natural language data and the concept-level sentiments they convey, we developed SenticNet 2, a publicly available semantic and affective resource for opinion mining and sentiment analysis. SenticNet 2 is built by means of sentic computing, a new paradigm that exploits both AI and Semantic Web techniques to better recognize, interpret, and process natural language opinions. By providing the semantics and sentics (that is, the cognitive and affective information) associated with over 14,000 concepts, SenticNet 2 represents one of the most comprehensive semantic resources for the development of affect-sensitive applications in fields such as social data mining, multimodal affective HCI, and social media marketing.