A Preliminary Analysis and Catalog of Thematic Labels

AAAI Conferences

An account of the labels commonly used to express themes could both help in assessing the coverage of models of narrative processing and support recognizing themes by the textual appearance of these labels. This paper presents a preliminary analysis and catalog of thematic labels such as “vicious cycle” and “underdog”. In contrast to a top-down approach that characterizes themes in terms of components of a model of narrative processing, a bottom-up approach is taken: thematic labels are gathered independently of any particular model and catalogued according to the types of relationships the corresponding themes convey.


A Framework to Induce Self-Regulation Through a Metacognitive Tutor

AAAI Conferences

A new architectural framework for a metacognitive tutoring system is presented, aimed at stimulating self-regulatory behavior in the learner. The framework extends the cognitive architecture of TutorJ, previously proposed by some of the authors. TutorJ relies mainly on dialogic interaction with the user and makes use of a statistical dialogue planner implemented as a Partially Observable Markov Decision Process (POMDP). A two-level structure has been designed for the statistical reasoner to cope with measuring and stimulating metacognitive skills in the user. Suitable actions have been designed for this purpose, starting from an analysis of the main questionnaires proposed in the literature. Our reasoner models the relation between each item in a questionnaire and the related metacognitive skill, so that the tutoring agent can select the proper action. The complete framework is detailed, the reasoner structure is discussed, and a simple application scenario is presented.
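The core of a POMDP dialogue planner is the belief update the agent performs after each observation. The following sketch is a hypothetical two-state example (it is not the TutorJ reasoner): the hidden state is whether a learner's metacognitive skill is low or high, the action is asking a questionnaire item, and the observation is the learner's answer.

```python
# Hypothetical two-state POMDP belief update, in the spirit of a
# questionnaire-driven metacognitive tutor.  States, probabilities, and
# the questionnaire item are illustrative assumptions, not the paper's model.
STATES = ("low", "high")

# T[s][s']: the skill is assumed static within one dialogue turn.
T = {"low":  {"low": 1.0, "high": 0.0},
     "high": {"low": 0.0, "high": 1.0}}

# O[s'][o]: a learner with high skill answers "yes" to this item more often.
O = {"low":  {"yes": 0.2, "no": 0.8},
     "high": {"yes": 0.9, "no": 0.1}}

def belief_update(belief, observation):
    """Standard POMDP filter: b'(s') ~ O(o|s') * sum_s T(s'|s) b(s)."""
    new_belief = {}
    for s2 in STATES:
        pred = sum(T[s][s2] * belief[s] for s in STATES)  # prediction step
        new_belief[s2] = O[s2][observation] * pred        # correction step
    z = sum(new_belief.values())                          # normalize
    return {s: p / z for s, p in new_belief.items()}

b = {"low": 0.5, "high": 0.5}   # uninformed prior over the learner's skill
b = belief_update(b, "yes")     # learner answers "yes" to the item
# posterior mass shifts toward "high": 0.9*0.5 / (0.9*0.5 + 0.2*0.5)
```

Selecting the "proper action" then amounts to choosing the questionnaire item whose expected observation is most informative under the current belief.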


Quantificational Sharpening of Commonsense Knowledge

AAAI Conferences

The KNEXT system produces a large volume of factoids from text, expressing possibilistic general claims such as 'A PERSON MAY HAVE A HEAD' or 'PEOPLE MAY SAY SOMETHING'. We present a rule-based method to sharpen certain classes of factoids into stronger, quantified claims such as 'ALL OR MOST PERSONS HAVE A HEAD' or 'ALL OR MOST PERSONS AT LEAST OCCASIONALLY SAY SOMETHING' -- statements strong enough to be used for inference. The judgement of whether and how to sharpen a factoid depends on the semantic categories of the terms involved, and the strength of the quantifier depends on how strongly the subject is associated with what is predicated of it. We provide an initial assessment of the quality of such automatic strengthening of knowledge, along with examples of reasoning with multiple sharpened premises.
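The shape of such a rule-based sharpening step can be sketched as follows. The thresholds, the association score, and the rule set are invented for illustration; KNEXT's actual rules additionally condition on the semantic categories of the terms.

```python
# Illustrative (not KNEXT's actual) rule-based sharpening: a possibilistic
# factoid is strengthened into a quantified claim, with the quantifier and
# frequency adverbial chosen by how strongly the subject is associated with
# the predicate (a hypothetical score in [0, 1] that a real system would
# estimate from corpus statistics).
def sharpen(subject, predicate, association):
    if association >= 0.8:
        quantifier = "ALL OR MOST"
        frequency = ""                        # e.g. an inalienable attribute
    elif association >= 0.4:
        quantifier = "ALL OR MOST"
        frequency = " AT LEAST OCCASIONALLY"  # habitual but not constant
    else:
        return None                           # too weak: leave the factoid as-is
    return f"{quantifier} {subject}{frequency} {predicate}"

print(sharpen("PERSONS", "HAVE A HEAD", 0.95))
# -> ALL OR MOST PERSONS HAVE A HEAD
print(sharpen("PERSONS", "SAY SOMETHING", 0.6))
# -> ALL OR MOST PERSONS AT LEAST OCCASIONALLY SAY SOMETHING
```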


Social Issues in the Understanding of Narrative

AAAI Conferences

This paper proposes a number of social issues that are essential in understanding any given story, and thus, that must be included in a comprehensive approach to computational modeling of narrative. It focuses on oral narratives, and on the social event of the telling of a story. For participants in the telling, the central social issue is the story’s evaluation or meaning: the point or moral of the story. Value or meaning is created relative to social membership, and so, to understand evaluation, it is not sufficient to understand a story solely as a bounded unit. Therefore, this paper examines the ways in which narrative meaning is negotiated between narrator and interlocutors. It demonstrates how a given story can take on different meanings for different audiences. The life course of a story is also proposed as a relevant dimension for understanding. Ephemeral stories are distinguished from stories that have multiple tellings, both for the stories of individuals and for stories that form part of the story stock of institutions. Storytelling rights are also considered: who within a group has the right to tell a particular story on a particular occasion. These issues are proposed as potential meta-data to be used in the analysis of stories. Finally, the paper indicates an area in which computational understanding of narrative, including these social issues, has potential for practical applications: as part of current commercial knowledge capture and archiving activities.


Detecting Ontological Conflicts in Protocols between Semantic Web Services

arXiv.org Artificial Intelligence

The task of verifying the compatibility of interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the type of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single standard ontology. In this work we investigate the approaches a server can take to verify whether a state with semantically inconsistent results can be reached during the execution of a protocol with a client, given that the client's ontology is published. Often a database is used to store the actual data alongside the ontologies, rather than storing the data as part of the ontology description. It is important to observe that, given the current state of the database, the semantic conflict state may not be reachable even when the server's verification indicates the possibility of reaching a conflict state. A relational-algebra-based decision procedure is therefore developed to incorporate the current state of the client and server databases into the overall verification procedure.
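The flavor of such a database-aware check can be illustrated with a toy relational-algebra-style test. The relations, instance names, and the disjointness axiom below are hypothetical, and this sketch is not the paper's decision procedure: it only shows the idea of joining client and server instance tables and selecting tuples whose classes the merged ontologies declare disjoint.

```python
# Hypothetical sketch: relations are modelled as sets of (instance, class)
# tuples, and the conflict check is a natural join on the instance id
# followed by a selection on declared class disjointness.
client_types = {("item42", "Laptop"), ("item7", "Tablet")}
server_types = {("item42", "Accessory"), ("item7", "Computer")}

# Disjointness axioms merged from both ontologies (unordered pairs).
disjoint = {frozenset({"Laptop", "Accessory"})}

def conflicting_instances(r1, r2, disjoint_pairs):
    """Join on the instance id, then select instances with disjoint classes."""
    return {iid for (iid, c1) in r1
                for (jid, c2) in r2
                if iid == jid and frozenset({c1, c2}) in disjoint_pairs}

print(conflicting_instances(client_types, server_types, disjoint))
# -> {'item42'}
```

An empty result at the current database state means the conflict flagged by the static protocol check is not (yet) realizable, which is exactly the distinction the abstract draws.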


Reasoning about Cardinal Directions between Extended Objects: The Hardness Result

arXiv.org Artificial Intelligence

The cardinal direction calculus (CDC) proposed by Goyal and Egenhofer is a very expressive qualitative calculus for directional information about extended objects. Earlier work has shown that consistency checking of complete networks of basic CDC constraints is tractable, while reasoning with the CDC in general is NP-hard. This paper shows, however, that if some constraints are allowed to remain unspecified, consistency checking of possibly incomplete networks of basic CDC constraints is already intractable. This draws a sharp boundary between the tractable and intractable subclasses of the CDC. The result is achieved by a reduction from the well-known 3-SAT problem.


The Lasso under Heteroscedasticity

arXiv.org Machine Learning

The performance of the Lasso is well understood under the assumptions of the standard linear model with homoscedastic noise. However, in several applications, the standard model does not describe the important features of the data. This paper examines how the Lasso performs on a non-standard model that is motivated by medical imaging applications. In these applications, the variance of the noise scales linearly with the expectation of the observation. Like all heteroscedastic models, the noise terms in this Poisson-like model are not independent of the design matrix. More specifically, this paper studies the sign consistency of the Lasso under a sparse Poisson-like model. In addition to studying sufficient conditions for the sign consistency of the Lasso estimate, this paper also gives necessary conditions for sign consistency. Both sets of conditions are comparable to results for the homoscedastic model, showing that when a measure of the signal-to-noise ratio is large, the Lasso performs well on both Poisson-like data and homoscedastic data. Simulations reveal that the Lasso performs equally well in terms of model selection on both Poisson-like data and homoscedastic data (with properly scaled noise variance), across a range of parameterizations. Taken as a whole, these results suggest that the Lasso is robust to Poisson-like heteroscedastic noise.
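The setting can be reproduced in a small simulation. The sketch below is didactic and not the paper's estimator or analysis: it generates data whose noise variance scales linearly with the mean (standard deviation proportional to the square root of the mean), fits the Lasso by plain cyclic coordinate descent with soft-thresholding, and checks that the signs of the strong coefficients are recovered. The sample size, coefficients, and regularization level are arbitrary choices for illustration.

```python
import random

# Synthetic "Poisson-like" data: Var(noise_i) is proportional to |mu_i|,
# so the noise is not independent of the design matrix.
random.seed(0)
n, p = 200, 3
beta_true = [3.0, 0.0, -3.0]
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
mu = [sum(X[i][j] * beta_true[j] for j in range(p)) for i in range(n)]
y = [mu[i] + random.gauss(0, 0.3) * abs(mu[i]) ** 0.5 for i in range(n)]

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent for the Lasso with soft-thresholding."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual, excluding coordinate j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # soft-threshold update for coordinate j
            if rho > lam:
                beta[j] = (rho - lam) / z
            elif rho < -lam:
                beta[j] = (rho + lam) / z
            else:
                beta[j] = 0.0
    return beta

beta_hat = lasso_cd(X, y, lam=20.0)
# the signs of the two strong coefficients survive the heteroscedastic noise
```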


Significance of Classification Techniques in Prediction of Learning Disabilities

arXiv.org Artificial Intelligence

The aim of this study is to show the importance of two classification techniques, viz. decision trees and clustering, in the prediction of learning disabilities (LD) in school-age children. LDs affect about 10 percent of all children enrolled in schools, and the problems of children with specific learning disabilities have long been a cause of concern to parents and teachers. Decision trees and clustering are powerful and popular tools used for classification and prediction in data mining. Different rules extracted from the decision tree are used for the prediction of learning disabilities. Clustering is the assignment of a set of observations into subsets, called clusters, which are useful for finding the different signs and symptoms (attributes) present in an LD-affected child. In this paper, the J48 algorithm is used for constructing the decision tree and the K-means algorithm is used for creating the clusters. Applying these classification techniques makes it possible to identify LD in a child.
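The clustering step can be sketched with a minimal K-means implementation. This toy version, with invented two-attribute symptom profiles and fixed initial centers, only illustrates the assignment/update loop; the paper itself relies on standard J48 and K-means implementations.

```python
# Minimal K-means sketch: alternate between assigning each point to its
# nearest center and moving each center to the mean of its cluster.
def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        clusters = [[] for _ in centers]
        for pt in points:
            d = [sum((a - b) ** 2 for a, b in zip(pt, c)) for c in centers]
            clusters[d.index(min(d))].append(pt)
        # update step: move each center to the mean of its cluster
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return clusters, centers

# two hypothetical symptom profiles: low attribute scores vs. high scores
group_a = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)]
group_b = [(8.0, 7.9), (8.2, 8.1), (7.8, 8.0)]
clusters, centers = kmeans(group_a + group_b, centers=[(0.0, 0.0), (9.0, 9.0)])
# the two symptom profiles end up in separate clusters
```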


Random Graph Generator for Bipartite Networks Modeling

arXiv.org Artificial Intelligence

The purpose of this article is to introduce a new iterative algorithm for generating random bipartite graphs whose properties resemble those of real-life bipartite networks. The algorithm can generate a wide range of random bigraphs whose features are determined by a set of parameters. We adapt the advances of the last decade in unipartite complex-network modeling to the bigraph setting. This data structure can be observed in several situations; however, only a few datasets are freely available for testing the algorithms (e.g. community detection, influential-node identification, information retrieval) that operate on such data. Therefore, artificial datasets are needed to support the development and testing of these algorithms. We are particularly interested in applying the generator to the analysis of recommender systems, and therefore focus on two characteristics that, beyond simple statistics, are in our opinion responsible for the performance of neighborhood-based collaborative filtering algorithms: the node degree distribution and the local clustering coefficient.
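A minimal iterative bipartite generator can be sketched as follows. This is a generic preferential-attachment scheme, not the authors' algorithm: each new "user" node attaches to m "item" nodes chosen proportionally to current item degree (realized by sampling an endpoint from the edge list), which yields the skewed degree distributions typical of recommender-system data. The parameters and the fixed item set are simplifying assumptions.

```python
import random

# Hypothetical sketch of an iterative bipartite generator with
# degree-preferential attachment on the item side.
def bipartite_pa(n_users, n_items_init, m, seed=0):
    rng = random.Random(seed)
    items = list(range(n_items_init))
    edges = []
    # seed graph: user 0 is linked to every initial item, so degrees are nonzero
    for it in items:
        edges.append((0, it))
    for u in range(1, n_users):
        targets = set()
        while len(targets) < m:
            # sampling an edge uniformly picks an item in proportion
            # to its current degree (preferential attachment)
            targets.add(rng.choice(edges)[1])
        for it in targets:
            edges.append((u, it))
    return edges

edges = bipartite_pa(n_users=50, n_items_init=5, m=2)
# every edge joins a user to an item, so the graph is bipartite by construction
```

A fuller generator would also grow the item side and add a tunable rewiring step to control the local clustering coefficient.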


CUR from a Sparse Optimization Viewpoint

arXiv.org Machine Learning

The CUR decomposition provides an approximation of a matrix $X$ that has low reconstruction error and that is sparse in the sense that the resulting approximation lies in the span of only a few columns of $X$. In this regard, it appears to be similar to many sparse PCA methods. However, CUR takes a randomized algorithmic approach, whereas most sparse PCA methods are framed as convex optimization problems. In this paper, we try to understand CUR from a sparse optimization viewpoint. We show that CUR is implicitly optimizing a sparse regression objective and, furthermore, cannot be directly cast as a sparse PCA method. We also observe that the sparsity attained by CUR possesses an interesting structure, which leads us to formulate a sparse PCA method that achieves a CUR-like sparsity.
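The structure of a CUR approximation can be seen in a toy example. The sketch below is only an illustration of the algebra: for an exactly rank-1 matrix, choosing one column C, one row R, and U as the (pseudo)inverse of their intersection reconstructs the matrix exactly. Real CUR algorithms sample multiple columns and rows randomly, typically with probabilities based on leverage scores; the fixed choice here is an assumption for clarity.

```python
# Toy CUR on an exactly rank-1 matrix A = a b^T (pure-Python lists).
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
A = [[ai * bj for bj in b] for ai in a]

c_idx, r_idx = 0, 0                        # selected column and row indices
C = [[row[c_idx]] for row in A]            # n x 1 matrix of sampled columns
R = [A[r_idx]]                             # 1 x n matrix of sampled rows
W = A[r_idx][c_idx]                        # 1 x 1 intersection of C and R
U = 1.0 / W                                # its pseudoinverse

# C @ U @ R, written out entrywise for the 1-column / 1-row case
A_cur = [[C[i][0] * U * R[0][j] for j in range(len(b))] for i in range(len(a))]
# A_cur reproduces A entry-by-entry, since the sampled column and row
# already span the rank-1 column and row spaces
```

The interesting point the abstract makes is that the sparsity pattern implicit in this construction, approximation restricted to the span of a few actual columns of the matrix, can itself be imposed as a constraint in a sparse-PCA-style optimization.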