
Linares López

AAAI Conferences

In this paper we propose a new algorithm for solving general two-player turn-taking games that performs symbolic search using binary decision diagrams (BDDs). It consists of two stages: first, it determines all breadth-first search (BFS) layers using forward search, omitting duplicate detection; second, the solving process operates in the backward direction only within these BFS layers, thereby partitioning all BDDs according to the layers in which the states reside. We provide experimental results for selected games and compare the algorithm to a previous approach. This comparison shows that in most cases the new algorithm outperforms the existing one in terms of runtime and memory, so that it can solve games that could not previously be solved with a general approach.
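The two-stage scheme can be illustrated with explicit state sets standing in for the BDDs (a minimal sketch on a hypothetical toy subtraction game of our own choosing; the paper's symbolic machinery is not reproduced here). Stage one builds the BFS layers forward without duplicate detection; stage two solves backward, layer by layer:

```python
# Toy game: players alternately remove 1 or 2 tokens from a pile;
# the player who cannot move (empty pile on their turn) loses.
def moves(state):
    pile, player = state
    return [(pile - k, 1 - player) for k in (1, 2) if pile - k >= 0]

def solve(n):
    # Stage 1: forward BFS, collecting layers, without duplicate
    # detection across layers (a state may recur in several layers).
    layers = [{(n, 0)}]
    while layers[-1]:
        nxt = {s for st in layers[-1] for s in moves(st)}
        if not nxt:
            break
        layers.append(nxt)
    # Stage 2: backward solving restricted to the BFS layers.
    value = {}  # state -> index of the winning player from that state
    for layer in reversed(layers):
        for state in layer:
            pile, player = state
            succs = moves(state)
            if not succs:                       # player to move loses
                value[state] = 1 - player
            elif any(value[s] == player for s in succs):
                value[state] = player           # some move wins
            else:
                value[state] = 1 - player       # all moves lose
    return value[(n, 0)]
```

Because each state records whose turn it is, the backward pass assigns the same value to a state no matter which layer it is processed in, which is what makes the layer-wise partitioning sound.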


Measuring Presence and Performance in a Virtual Reality Police Use of Force Training Simulation Prototype

AAAI Conferences

In this paper we describe a virtual reality training simulation designed to help police officers learn use of force policies. Our goal is to test a training simulation prototype by measuring improvements to presence and performance. If successful, this could lead to a full-scale virtual reality narrative training simulation. The simulation uses a planner-based experience manager to determine the actions of agents other than the participant. Participants' actions were logged, physiological data were recorded, and the participants filled out questionnaires. Player knowledge attributes were authored to measure participants' understanding of the teaching materials. We demonstrate that when participants interact with the simulation using virtual reality they experience greater presence than when using traditional screen and keyboard controls. We also demonstrate that participants' performance improves over repeated sessions.


Antonucci

AAAI Conferences

We focus on the problem of modeling deterministic equations over continuous variables in discrete Bayesian networks. This is typically achieved by a discretization of both input and output variables and a degenerate quantification of the corresponding conditional probability tables. This approach, based on classical probabilities, cannot properly model the information loss induced by the discretization. We show that a reliable modeling of such epistemic uncertainty can instead be achieved by credal sets, i.e., convex sets of probability mass functions. This transforms the original Bayesian network into a credal network, possibly returning interval-valued inferences, which are robust with respect to the information loss induced by the discretization. Algorithmic strategies for an optimal choice of the discretization bins are also provided.
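As an illustration (a minimal sketch with an assumed deterministic equation y = x² and hand-picked bins; this is not the paper's bin-selection algorithm): once X is discretized, several Y bins may be reachable from a single X bin, and a credal treatment keeps all of them as possible instead of degenerately assigning all mass to one:

```python
import numpy as np

def reachable_y_bins(f, x_edges, y_edges, grid=1000):
    """For each X bin, list the Y bins reachable under y = f(x).
    A vacuous credal set over these bins (lower probability 0, upper
    probability 1 on each) models the information loss of the
    discretization, where a classical CPT would have to pick one bin."""
    cpt = []
    for lo, hi in zip(x_edges[:-1], x_edges[1:]):
        xs = np.linspace(lo, hi, grid)           # sample the X bin
        bins = np.searchsorted(y_edges, f(xs), side="right") - 1
        bins = np.clip(bins, 0, len(y_edges) - 2)  # keep edge values in range
        cpt.append(sorted(set(bins.tolist())))
    return cpt
```

For y = x² with X bins [0, 1] and [1, 2] and unit-width Y bins on [0, 4], the first X bin reaches two Y bins and the second reaches three, so interval-valued (rather than point-valued) conditionals are the faithful model.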


Using Linguistic Context to Learn Folksonomies from Task-Oriented Dialogues

AAAI Conferences

Dialogue systems are intended to facilitate the interaction between humans and computers. A key element in a dialogue system is the conceptual model which represents a domain. Folksonomies are very simple forms of knowledge representation which may be used to specify the conceptual model. However, folksonomies by nature suffer from ambiguity. In this paper, we present a method which uses linguistic context to learn folksonomies from task-oriented dialogues. The linguistic context can be useful for reducing ambiguity, for instance when using the folksonomies to interpret utterances. Experiments show that the learned folksonomies increase the accuracy of interpretation compared with not using the contextual information.


Silva

AAAI Conferences

Semantic labeling of texts allows people and computing devices to more easily understand the meaning of a natural language sentence as a whole. It is very often one of the steps in natural language processing procedures. However, this step is often done manually, which is expensive and time-consuming. When automatic labeling systems are employed, methods such as maximum entropy models are used, which take as input features specified by specialists, which also makes development of the system more expensive. In this article we present a deep recurrent network model that semantically annotates texts in English, using the top categories of an ontology as labels. Tests showed that it is possible to obtain better results than with models that require the features to be made explicit.


Adaptation of Multivariate Concept to Multi-Way Agglomerative Clustering for Hierarchical Aspect Aggregation

AAAI Conferences

Hierarchical review aspect aggregation is an important challenge in review summarization. Currently, agglomerative clustering is widely used for hierarchical aspect aggregation. We identify an important but less-studied issue in using agglomerative clustering for this task. This paper proposes a novel approach to generating a multi-way hierarchy by adapting the multivariate concept. Furthermore, we propose a novel experimental approach to evaluating the acceptability of the aspect relations obtained from the generated hierarchy.
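A minimal sketch of the multi-way idea (our own illustrative reading, on one-dimensional points with single-linkage distance, not the paper's multivariate construction): rather than merging only the single closest pair per step, each pass merges every cluster that falls within the current threshold of an existing group, so a node in the hierarchy can acquire more than two children at once:

```python
def multiway_agglomerate(points, thresholds):
    """One pass per threshold; each pass may merge several clusters
    into one group, yielding multi-way (not strictly binary) nodes."""
    clusters = [[p] for p in points]
    for t in thresholds:
        merged = []
        for c in clusters:
            # Attach c to the first existing group within distance t
            # (single linkage); otherwise c starts a new group.
            for g in merged:
                if min(abs(a - b) for a in c for b in g) <= t:
                    g.extend(c)
                    break
            else:
                merged.append(list(c))
        clusters = merged
    return clusters
```

On the points 1, 2, 10, 11, 12, 30 a single pass with threshold 1.5 produces a three-child group {10, 11, 12} directly, where binary agglomeration would need two nested merges.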


Flexible Approach for Computer-Assisted Reading and Analysis of Texts

AAAI Conferences

A Computer-Assisted Reading and Analysis of Texts (CARAT) process is a complex technology that connects language, text, information, and knowledge theories with computational formalizations, statistical approaches, symbolic approaches, standard and non-standard logics, and more. This process should always remain under the control of the user, according to their subjectivity, their knowledge, and the purpose of their analysis. It therefore becomes important to design platforms that support the design of CARAT tools, their management, their adaptation to new needs, and experimentation with them. Although several platforms for mining data, including textual data, have emerged in recent years, they lack flexibility and sound formal foundations. In this paper we propose a formal model with strong logical foundations, based on typed applicative systems.


Gautam

AAAI Conferences

Semantic similarity is a major automated approach to addressing many tasks such as essay grading, answer assessment, text summarization, and information retrieval. Many semantic similarity methods rely on semantic representations such as Latent Semantic Analysis (LSA), an unsupervised method for inferring a vectorial semantic representation of words or larger texts such as documents. Two ingredients in obtaining LSA vector representations are the corpus of texts from which the vectors are derived and the dimensionality of the resulting space. In this work, we investigate the effect of corpus size and vector dimensionality on assessing student-generated content in advanced learning systems, namely virtual internships. Automating the assessment of student-generated content would greatly increase the scalability of virtual internships to millions of learners at reasonable cost. Prior work on automated assessment of notebook entries relied on classifiers trained on participant data. However, when a new virtual internship is created, for instance for a new domain, no participant data is available a priori. Here, we report on our effort to develop an LSA-based assessment method without student data, and we investigate the optimal corpus size and vector dimensionality for such LSA-based methods.
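The two ingredients, corpus and dimensionality, can be made concrete with a small truncated-SVD sketch (a toy term-document matrix of our own invention; the virtual-internship corpora and the dimensionalities studied in the paper are not reproduced here):

```python
import numpy as np

def lsa_vectors(term_doc, k):
    """Project documents (columns of term_doc) into a k-dimensional
    latent space via truncated SVD; returns one row per document."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return (s[:k, None] * vt[:k]).T

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy corpus: documents 0 and 1 share the term "pet"; document 2
# is about an unrelated topic.
X = np.array([
    [1., 0., 0.],   # cat
    [1., 1., 0.],   # pet
    [0., 1., 0.],   # dog
    [0., 0., 2.],   # bank
])
docs = lsa_vectors(X, k=2)
```

With k = 2 the truncation discards the component that separates documents 0 and 1, so their cosine similarity rises toward 1 while the unrelated document stays near 0; choosing k (and the corpus behind X) is exactly the kind of dimensionality effect the study measures.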