Effect of Latency on Pursuit Problems

Birmingham, William Peter (Grove City College) | Rose, Shane (Grove City College) | Miller, Gregory (Grove City College) | Mahan, Matthew (Grove City College)

AAAI Conferences

We model the pursuit problem as a set of distributed agents communicating over a network subject to latency. Latency has serious deleterious effects on solving the pursuit problem. In this paper, we present a simple yet effective way of dealing with latency that yields very good performance. Our method disperses predators across the region into which the prey may move, with the region sized to account for network latency.
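The abstract does not spell out the dispersal rule; a minimal sketch of one way to realize the idea is below. The circular-region assumption, the even angular spacing, and all function names are illustrative assumptions, not the authors' algorithm.

```python
import math

def reachable_radius(prey_speed, latency):
    """Radius of the disc the prey may reach before an observation
    delayed by `latency` seconds can be acted upon."""
    return prey_speed * latency

def disperse_predators(prey_pos, prey_speed, latency, n_predators):
    """Place predators evenly on the boundary of the prey's
    reachable region, so every escape direction is covered."""
    r = reachable_radius(prey_speed, latency)
    px, py = prey_pos
    return [
        (px + r * math.cos(2 * math.pi * k / n_predators),
         py + r * math.sin(2 * math.pi * k / n_predators))
        for k in range(n_predators)
    ]
```

Under this sketch, higher latency simply inflates the dispersal radius, so stale position reports still bound where the prey can be.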


Iterative Ontology Selection Guided by User for Building Domain Ontologies

Minyaoui, Asma (University of Sfax) | Gargouri, Faiez (University of Sfax)

AAAI Conferences

In this paper we present a new method for ontology selection in a reuse context. The novel feature of this method is the iterative selection of the reused ontologies. Ontology selection is guided by the user according to their requirements and their perception of the target domain. Starting from a first selected ontology, the concepts with the weakest density are identified; the ontology developer then chooses among them the ones to be refined in order to cover a specific scope of the domain.
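The abstract does not define "density"; a toy sketch under the assumption that density counts the relations, attributes, and subclasses attached to a concept (all names and the data layout are illustrative):

```python
def concept_density(ontology, concept):
    """Assumed density measure: number of relations, attributes,
    and subclasses attached to the concept."""
    entry = ontology[concept]
    return (len(entry.get("relations", []))
            + len(entry.get("attributes", []))
            + len(entry.get("subclasses", [])))

def weakest_concepts(ontology, k=3):
    """Return the k lowest-density concepts; the developer then
    chooses among these which ones to refine next."""
    return sorted(ontology, key=lambda c: concept_density(ontology, c))[:k]
```

The user-in-the-loop step corresponds to picking from the returned candidates rather than refining them automatically.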


The Devil Is in the Details: New Directions in Deception Analysis

McCarthy, Philip Michael (The University of Memphis) | Duran, Nicholas D. (University of California Merced) | Booker, Lucille M. (The University of Memphis)

AAAI Conferences

In this study, we use the computational textual analysis tool, the Gramulator, to identify and examine the distinctive linguistic features of deceptive and truthful discourse. The theme of the study is abortion rights and the deceptive texts are derived from a Devil’s Advocate approach, conducted to suppress personal beliefs and values. Our study takes the form of a contrastive corpus analysis, and produces systematic differences between truthful and deceptive personal accounts. Results suggest that deceivers employ a distancing strategy that is often associated with deceptive linguistic behavior. Ultimately, these deceivers struggle to adopt a truth perspective. Perhaps most importantly, our results indicate issues of concern with current deception detection theory and methodology. From a theoretical standpoint, our results question whether deceivers are deceiving at all or whether they are merely poorly expressing a rhetorical position, caused by being forced to speculate on a perceived prototypical position. From a methodological standpoint, our results cause us to question the validity of deception corpora. Consequently, we propose new rigorous standards so as to better understand the subject matter of the deception field. Finally, we question the prevailing approach of abstract data measurement and call for future assessment to consider contextual lexical features. We conclude by suggesting a prudent approach to future research for fear that our eagerness to analyze and theorize may cause us to misidentify deception. After all, successful deception, which is the kind we seek to detect, is likely to be an elusive and fickle prey.


A Pruning Based Approach for Scalable Entity Coreference

Song, Dezhao (Lehigh University) | Heflin, Jeff (Lehigh University)

AAAI Conferences

Entity coreference is the process of deciding which identifiers (e.g., person names, locations, ontology instances, etc.) refer to the same real-world entity. In the Semantic Web, entity coreference can be used to detect equivalence relationships between heterogeneous Semantic Web datasets and to explicitly link coreferent ontology instances via the owl:sameAs property. Given the large scale of Semantic Web data today, we propose two pruning techniques for scalably detecting owl:sameAs links between ontology instances by comparing the similarity of their context graphs. First, a sampling-based technique is designed to estimate the potential contribution of each RDF node in the context graph and prune insignificant context. Furthermore, a utility function is defined to reduce the cost of performing such estimations. We evaluate our pruning techniques on three Semantic Web instance categories. We show that the pruning techniques enable the entity coreference system to run 10 to 35 times faster than without them while still maintaining comparably good F1-scores.
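The paper's exact utility function and estimator are not given in the abstract; a toy sketch of the general sampling-then-pruning idea (every name and the contribution measure here are assumptions for illustration):

```python
import random

def estimate_contribution(node, sim_fn, sampled_pairs):
    """Estimate how much a context node contributes to overall graph
    similarity using only a sample of instance pairs, instead of
    comparing over all pairs (assumed form of the sampling step)."""
    scores = [sim_fn(node, a, b) for a, b in sampled_pairs]
    return sum(scores) / len(scores)

def prune_context(nodes, sim_fn, pairs, sample_size, threshold, seed=0):
    """Keep only context nodes whose estimated contribution reaches
    the threshold; the rest are pruned before the full comparison."""
    rng = random.Random(seed)
    sampled = rng.sample(pairs, min(sample_size, len(pairs)))
    return [n for n in nodes
            if estimate_contribution(n, sim_fn, sampled) >= threshold]
```

The speedup in the paper comes from running the expensive full comparison only over the surviving nodes.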


Ant Hunt: Towards a Validated Model of Live Ant Hunting Behavior

Yang, Yu-Ting (Georgia Institute of Technology) | Quitmeyer, Andrew (Georgia Institute of Technology) | Hrolenok, Brian (Georgia Institute of Technology) | Shang, Harry (Georgia Institute of Technology) | Nguyen, Dinh Bao (Georgia Institute of Technology) | Balch, Tucker (Georgia Institute of Technology) | Medina, Terrance (University of Georgia) | Sherer, Cole (University of Georgia) | Hybinette, Maria (University of Georgia)

AAAI Conferences

Biologists seek concise, testable models of behavior for the animals they study. We suggest a robot programming paradigm in which animal behaviors are described as robot controllers to support a cycle of hypothesis generation and testing of animal models. In this work we illustrate that approach by modeling the hunting behavior of a captive colony of Aphaenogaster cockerelli, a desert harvester ant. In laboratory animal experiments we introduce live prey (fruit flies) into the foraging arena of the colony. We observe the behavior of the ants, and we measure aspects of their performance in capturing the prey. Based on these observations we create a model of their behavior using Clay, a Java library developed for coding hybrid controllers in a behavior-based manner. We then validate that model in quantitative comparisons with the live animal behavior.


Measuring Semantic Similarity in Short Texts through Greedy Pairing and Word Semantics

Lintean, Mihai (University of Memphis) | Rus, Vasile (University of Memphis)

AAAI Conferences

We propose in this paper a greedy method for the problem of measuring semantic similarity between short texts. Our method is based on the principle of compositionality, which states that the overall meaning of a sentence can be captured by summing up the meanings of its parts, i.e., the meanings of its words in our case. Based on this principle, we extend word-to-word semantic similarity metrics to quantify semantic similarity at the sentence level. We report results using several word-to-word semantic similarity metrics, based either on word knowledge or on vectorial representations of meaning. Our approach performs better than similar approaches on the tasks of paraphrase identification and recognizing textual entailment, two illustrative semantic similarity tasks. We also report on the effect of word weighting and of function words on the performance of the proposed method.
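The greedy pairing idea can be sketched concretely: repeatedly take the highest-scoring unpaired word pair across the two texts, then normalize the accumulated score. This is a minimal sketch of the general technique, not the authors' exact formulation; the normalization choice and names are assumptions.

```python
def greedy_pairing_similarity(words_a, words_b, word_sim):
    """Greedily pair words across two texts by descending word-to-word
    similarity, then normalize the summed pair scores by the length of
    the longer text (compositionality: sentence meaning from word meanings)."""
    pairs = sorted(((word_sim(a, b), i, j)
                    for i, a in enumerate(words_a)
                    for j, b in enumerate(words_b)),
                   reverse=True)
    used_a, used_b, total = set(), set(), 0.0
    for score, i, j in pairs:
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            total += score
    n = max(len(words_a), len(words_b))
    return total / n if n else 0.0
```

Any word-to-word metric (knowledge-based or vectorial, as the abstract mentions) can be plugged in as `word_sim`.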


Conditional Objects Revisited: Variants and Model Translations

Beierle, Christoph (Fern University, Hagen) | Kern-Isberner, Gabriele (Technical University Dortmund)

AAAI Conferences

The quality criteria of system P have guided qualitative uncertain reasoning for more than two decades. Different semantic approaches have been presented to provide semantics for system P. The aim of the present paper is to investigate the semantic structures underlying system P in more detail, namely at the level of the models. In particular, we focus on the approach via conditional objects, which relies on Boolean intervals without making any use of qualitative or quantitative information. Indeed, our studies confirm the singular position of conditional objects, but we are also able to establish semantic relationships via novel variants of model theories.


Emotion Oriented Programming: Computational Abstractions for AI Problem Solving

Darty, Kévin (Université Pierre et Marie Curie (UPMC)) | Sabouret, Nicolas (Université Pierre et Marie Curie (UPMC))

AAAI Conferences

In this paper, we present a programming paradigm for AI problem solving based on computational concepts drawn from Affective Computing. It is believed that emotions participate in human adaptability and reactivity, and in behaviour selection in complex and dynamic environments. We propose to define a mechanism inspired by this observation for general AI problem solving. To this purpose, we synthesize emotions as programming abstractions that represent the perception of the environment's state w.r.t. predefined heuristics such as goal distance, action capability, etc. We first describe the general architecture of this "emotion-oriented" programming model. We define the vocabulary that allows programmers to describe the problem to be solved (i.e., the environment) and the action selection function based on emotion abstractions (i.e., the agent's behaviours). We then present the runtime algorithm that builds emotions out of the environment, stores them in the agent's memory, and selects behaviours accordingly. We present the implementation of a classical labyrinth problem solver in this model. We show that the solutions obtained by this easy-to-implement emotion-oriented program are of good quality while having a reduced computational cost.
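The appraise-then-select loop described above can be sketched in a few lines. The intensity scheme, the winner-take-all selection, and all names are assumptions for illustration, not the paper's runtime algorithm.

```python
def appraise(state, heuristics):
    """Build emotion abstractions from the environment state: each
    heuristic (e.g. goal distance, action capability) maps the state
    to an intensity in [0, 1] for a named emotion (assumed scheme)."""
    return {name: h(state) for name, h in heuristics.items()}

def select_behaviour(emotions, behaviours):
    """Pick the behaviour bound to the strongest current emotion
    (a simple winner-take-all action selection function)."""
    strongest = max(emotions, key=emotions.get)
    return behaviours[strongest]
```

The programmer's job in this paradigm reduces to supplying the heuristics (the environment description) and the emotion-to-behaviour bindings.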


Learning Artifact Capabilities Via a Hybrid Ontology

Mokom, Felicitas (University of Windsor) | Kobti, Ziad (University of Windsor)

AAAI Conferences

Artifact capabilities can play an important role in understanding human cognition. Over time, humans learn to use artifacts, evolve that knowledge, and combine acquired capabilities with others to form complex capabilities. In this study we present a hybrid ontology of artifacts to facilitate learning artifact capabilities. We develop a framework in which agents simultaneously exploit a centralized artifact ontology in the environment and a distributed artifact ontology local to each agent. We demonstrate how both ontologies can be used by agents in the artifact selection process and in learning artifact use. The local ontology serves as domain knowledge gained by the agent as it learns. We illustrate with an example how an acquired artifact capability can be stored in an agent's local ontology for future use.


Genetic Algorithms with Lego Mindstorms and Matlab

Klassner, Frank (Villanova University) | Peyton-Jones, James (Villanova University) | Lehmer, Kurt (Villanova University)

AAAI Conferences

This paper presents a case study in combining Lego Mindstorms NXT with Matlab/Simulink to help students in an undergraduate Machine Learning course study genetic algorithm design and testing. The project uses the VU-LRT toolbox to enable students to access the hardware capabilities of the Mindstorms platform from within Matlab. The course's enrollment comprised students from several majors with a variety of programming backgrounds, and the course is part of an interdisciplinary cognitive science concentration. We report on the VU-LRT toolbox, the design considerations imposed by the diversity of the student population on the laboratory module, and student evaluations of the module.
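Although the paper's focus is the course design, the kind of genetic algorithm such a module exercises can be sketched minimally (in Python rather than Matlab; tournament selection, one-point crossover, and bit-flip mutation are generic choices here, not the course's specific design):

```python
import random

def evolve(fitness, genome_len, pop_size=20, generations=50,
           mutation_rate=0.05, seed=0):
    """Minimal binary genetic algorithm: tournament selection,
    one-point crossover, per-bit flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament of size 2: keep the fitter of two random genomes.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Example: maximize the number of ones in the genome ("one-max").
best = evolve(sum, genome_len=16)
```

In the course setting, the fitness function would instead score a robot behaviour executed on the Mindstorms hardware via the VU-LRT toolbox.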