In this study, we use the computational textual analysis tool the Gramulator to identify and examine the distinctive linguistic features of deceptive and truthful discourse. The theme of the study is abortion rights, and the deceptive texts are derived from a Devil's Advocate approach, conducted to suppress personal beliefs and values. Our study takes the form of a contrastive corpus analysis and reveals systematic differences between truthful and deceptive personal accounts. Results suggest that deceivers employ a distancing strategy that is often associated with deceptive linguistic behavior. Ultimately, these deceivers struggle to adopt a truth perspective. Perhaps most importantly, our results indicate issues of concern with current deception detection theory and methodology. From a theoretical standpoint, our results question whether deceivers are deceiving at all or whether they are merely expressing a rhetorical position poorly, having been forced to speculate on a perceived prototypical position. From a methodological standpoint, our results cause us to question the validity of deception corpora. Consequently, we propose new, rigorous standards so as to better understand the subject matter of the deception field. Finally, we question the prevailing approach of abstract data measurement and call for future assessment to consider contextual lexical features. We conclude by suggesting a prudent approach to future research, for fear that our eagerness to analyze and theorize may cause us to misidentify deception. After all, successful deception, which is the kind we seek to detect, is likely to be an elusive and fickle prey.
The most popular topic modeling algorithm is LDA, Latent Dirichlet Allocation. Let's first unravel this imposing name to get an intuition of what it does. Figure 1 below describes how the steps of LDA fit together to find the topics within a corpus of documents. "A document is generated by sampling a mixture of these topics and then sampling words from that mixture" (David Blei, Andrew Ng, and Michael Jordan, in the original LDA paper). NB: In Figure 1, we have set K = 3 topics and N = 8 words in our vocabulary for ease of illustration.
The ability to transfer the style of texts or images is an important measure of the advancement of artificial intelligence (AI). However, progress in language style transfer lags behind other domains, such as computer vision, mainly because of the lack of parallel data and reliable evaluation metrics. In response to the challenge of lacking parallel data, we explore learning style transfer from non-parallel data. We propose two models to achieve this goal. The key idea behind the proposed models is to learn separate content representations and style representations using adversarial networks. Considering the problem of lacking principled evaluation metrics, we propose two novel evaluation metrics that measure two aspects of style transfer: transfer strength and content preservation. We benchmark our models and the evaluation metrics on two style transfer tasks: paper-news title transfer and positive-negative review transfer. Results show that the proposed content preservation metric is highly correlated with human judgments, and that the proposed models are able to generate sentences with a similar content preservation score but higher style transfer strength compared to an auto-encoder.
The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora — Wikipedia and a collection of weblog entries — indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.
Analysis of mobile app reviews has shown its important role in requirements engineering, software maintenance, and the evolution of mobile apps. Mobile app developers check their users' reviews frequently to clarify the issues experienced by users or to capture new issues introduced by a recent app update. App reviews have a dynamic nature, and their discussed topics change over time. Changes in the topics among reviews collected for different versions of an app can reveal important issues about an app update. A main technique in this analysis is the use of topic modeling algorithms. However, app reviews are short texts, and it is challenging to unveil their latent topics over time. Conventional topic models suffer from the sparsity of word co-occurrence patterns while inferring topics for short texts. Furthermore, these algorithms cannot capture topics over numerous consecutive time-slices. Online topic modeling algorithms speed up the inference of topic models for the texts collected in the latest time-slice by saving a fraction of the data from the previous time-slice. But these algorithms do not analyze the statistical data of all the previous time-slices, which can contribute to the topic distribution of the current time-slice. We propose the Adaptive Online Biterm Topic Model (AOBTM) to model topics in short texts adaptively. AOBTM alleviates the sparsity problem in short texts and considers the statistical data for an optimal number of previous time-slices. We also propose parallel algorithms to automatically determine the optimal number of topics and the best number of previous versions that should be considered in the topic inference phase. Automatic evaluation on collections of app reviews and real-world short text datasets confirms that AOBTM finds more coherent topics and outperforms the state-of-the-art baselines.