
Collaborating Authors: Peng, Jing


CATs are Fuzzy PETs: A Corpus and Analysis of Potentially Euphemistic Terms

arXiv.org Artificial Intelligence

Euphemisms have not received much attention in natural language processing, despite being an important element of polite and figurative language. Euphemisms prove to be a difficult topic, not only because they are subject to language change, but also because humans may not agree on what is a euphemism and what is not. Nevertheless, the first step to tackling the issue is to collect and analyze examples of euphemisms. We present a corpus of potentially euphemistic terms (PETs) along with example texts from the GloWbE corpus. Additionally, we present a subcorpus of texts where these PETs are not being used euphemistically, which may be useful for future applications. We also discuss the results of multiple analyses run on the corpus. Firstly, we find that sentiment analysis on the euphemistic texts supports the claim that PETs generally decrease negative and offensive sentiment. Secondly, we observe cases of disagreement in an annotation task in which humans are asked to label PETs as euphemistic or not in a subset of our corpus text examples. We attribute the disagreement to a variety of potential reasons, including whether the PET is a commonly accepted term (CAT).
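
As a minimal illustration of the kind of sentiment comparison described above, the Python sketch below scores a sentence containing a PET against an invented literal paraphrase using NLTK's VADER analyzer; the sentence pair and the scoring setup are assumptions for illustration only, not the paper's actual pipeline or data.

# Sketch: compare sentiment of a PET sentence vs. a literal paraphrase.
# The sentence pair is invented for illustration; requires: pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

euphemistic = "He passed away last spring."   # uses the PET "passed away"
literal = "He died last spring."              # literal counterpart

for label, text in [("PET", euphemistic), ("literal", literal)]:
    scores = analyzer.polarity_scores(text)   # dict with neg/neu/pos/compound keys
    print(label, scores["neg"], scores["compound"])

If the paper's finding holds for this pair, the PET version should receive a lower negative-sentiment score than its literal counterpart.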


Learning Action Models from Disordered and Noisy Plan Traces

arXiv.org Artificial Intelligence

There is increasing awareness in the planning community that the burden of specifying complete domain models is too high, which impedes the applicability of planning technology in many real-world domains. Although many learning systems help to automatically learn domain models, most existing work assumes that the input traces are completely correct. A more realistic situation is that the plan traces are disordered and noisy, such as plan traces derived from natural language descriptions. In this paper we propose and evaluate an approach for learning action models from such traces. Our approach takes as input a set of plan traces with disordered actions and noise and outputs action models that can best explain the plan traces. We use a MAX-SAT framework for learning, where the constraints are derived from the given plan traces. Unlike traditional action-model learners, our approach allows the states in plan traces to be partially observable and noisy, and the actions to be disordered and parallel. We demonstrate the effectiveness of our approach through a systematic empirical evaluation on both IPC domains and a real-world dataset extracted from natural language documents.
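
To make the MAX-SAT framing concrete, the toy sketch below encodes a few hypothetical constraints over candidate action-model facts as weighted soft clauses and finds the assignment with maximal satisfied weight by brute force; the variables, clauses, weights, and solver are invented stand-ins and do not reproduce the paper's actual constraint encoding.

# Toy weighted MAX-SAT over candidate domain-model facts, e.g.
# x1 = "pickup has precondition handempty", x2 = "pickup adds holding".
from itertools import product

variables = ["x1", "x2", "x3"]
# Each clause is (list of (variable, required truth value), weight); weights
# stand in for how strongly the (noisy) plan traces support the constraint.
clauses = [
    ([("x1", True)], 5.0),
    ([("x2", True)], 4.0),
    ([("x1", False), ("x3", True)], 1.0),   # weak, possibly noise-induced clause
]

def satisfied(assignment, clause):
    return any(assignment[var] == value for var, value in clause)

best = max(
    (dict(zip(variables, bits)) for bits in product([False, True], repeat=len(variables))),
    key=lambda a: sum(w for c, w in clauses if satisfied(a, c)),
)
print(best)   # the maximal-weight assignment decides which model facts to keep

In practice a dedicated weighted MAX-SAT solver would replace this brute-force search.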


Dynamic Shared Context Processing in an E-Collaborative Learning Environment

arXiv.org Artificial Intelligence

In this paper, we propose a dynamic shared context processing method based on the DSC (Dynamic Shared Context) model, applied in an e-collaborative learning environment. Firstly, we present the model, which provides a way to measure the relevance between events and roles in collaborative environments. With this method, we can share the most appropriate event information with each role instead of sharing all information with all roles in a collaborative work environment. Then, we apply and verify this method in our project, a Google App-supported e-learning collaborative environment. During this experiment, we compared the relevance of events and roles measured by the DSC method with manually measured relevance, and we describe the favorable points of this comparison and our findings. Finally, we discuss our future research on a hybrid DSC method to make dynamic information sharing more effective in a collaborative work environment.
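
The abstract does not give the DSC relevance formula, so the sketch below is a hypothetical stand-in: it scores event-role relevance with a cosine similarity over hand-made feature vectors and forwards an event only to roles above a threshold, mirroring the idea of sharing the most appropriate event information with each role rather than everything with everyone.

# Hypothetical relevance-based event routing (not the paper's DSC formula).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Role and event feature vectors over (content, schedule, grading); values invented.
roles = {"tutor": [0.2, 0.3, 0.9], "learner": [0.9, 0.6, 0.1]}
event = {"name": "assignment graded", "features": [0.1, 0.2, 0.95]}

THRESHOLD = 0.8
for role, profile in roles.items():
    relevance = cosine(event["features"], profile)
    if relevance >= THRESHOLD:
        print("share '%s' with %s (relevance %.2f)" % (event["name"], role, relevance))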


Intelligent Time-Aware Query Translation for Text Sources

AAAI Conferences

This paper describes a system called SITAC, based on our proposed approach, for discovering concepts (called SITACs) in text archives that are semantically identical but change their names over time. Our approach integrates natural language processing, association rule mining, and contextual similarity to discover SITACs in order to answer historical queries over text corpora.
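
One ingredient mentioned above, contextual similarity, can be sketched as comparing a term's co-occurrence profile across two time slices of an archive; the terms, toy documents, and scoring below are invented for illustration and do not reproduce the SITAC system.

# Hypothetical contextual-similarity check for two names from different periods.
from collections import Counter

def context_vector(term, sentences, window=3):
    # Count words that appear within `window` positions of `term`.
    counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            if w == term:
                counts.update(words[max(0, i - window):i] + words[i + 1:i + 1 + window])
    return counts

def cosine(c1, c2):
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    norm = (sum(v * v for v in c1.values()) ** 0.5) * (sum(v * v for v in c2.values()) ** 0.5)
    return dot / norm if norm else 0.0

old_slice = ["the ministry of war announced its annual budget today"]
new_slice = ["the ministry of defence announced its annual budget today"]
print(cosine(context_vector("war", old_slice), context_vector("defence", new_slice)))
# A high score suggests the two names may denote the same evolving concept.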


An Adaptive Metric Machine for Pattern Classification

Neural Information Processing Systems

Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with finite samples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose a locally adaptive nearest neighbor classification method to try to minimize bias. We use a Chi-squared distance analysis to compute a flexible metric for producing neighborhoods that are elongated along less relevant feature dimensions and constricted along the most influential ones. As a result, the class conditional probabilities tend to be smoother in the modified neighborhoods, whereby better classification performance can be achieved. The efficacy of our method is validated and compared against other techniques using a variety of real-world data.
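
The sketch below illustrates the flavor of a locally adaptive metric as described above: per-feature relevance is estimated from how much conditioning on a feature's value shifts the local class distribution (a Chi-squared-style comparison), and those relevances weight the distance used for the final neighborhood. The estimator, bandwidth, and weighting scheme are simplified stand-ins, not the exact procedure from the paper.

# Simplified locally adaptive nearest-neighbor classifier (illustrative only).
import numpy as np

def class_probs(y, n_classes):
    return np.bincount(y, minlength=n_classes) / max(len(y), 1)

def local_feature_weights(X, y, x_query, k=50, n_classes=2, h=0.5):
    # Neighborhood of the query under the plain Euclidean metric.
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    X_loc, y_loc = X[idx], y[idx]
    p = class_probs(y_loc, n_classes)                      # P(class | local region)
    w = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        near = np.abs(X_loc[:, i] - x_query[i]) <= h       # condition on feature i
        if near.sum() == 0:
            continue
        p_i = class_probs(y_loc[near], n_classes)          # P(class | feature i fixed)
        w[i] = np.sum((p - p_i) ** 2 / (p_i + 1e-9))       # Chi-squared-style distance
    return np.exp(w) / np.sum(np.exp(w))                   # relevant features weigh more

def predict(X, y, x_query, k=5, n_classes=2):
    w = local_feature_weights(X, y, x_query, n_classes=n_classes)
    d = np.sqrt(((X - x_query) ** 2 * w).sum(axis=1))      # weighted, locally adapted metric
    neighbors = y[np.argsort(d)[:k]]
    return np.bincount(neighbors, minlength=n_classes).argmax()

Larger weights shrink the neighborhood along influential features and stretch it along the less relevant ones, which is the effect the abstract describes.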

