Dublin Institute of Technology
Sentiment Classification Using Negation as a Proxy for Negative Sentiment
Ohana, Bruno (Dublin Institute of Technology) | Tierney, Brendan (Dublin Institute of Technology) | Delany, Sarah Jane (Dublin Institute of Technology)
We explore the relationship between negated text and negative sentiment in the task of sentiment classification. We propose a novel adjustment factor based on negation occurrences as a proxy for negative sentiment that can be applied to lexicon-based classifiers equipped with a negation detection pre-processing step. We performed an experiment on a multi-domain customer review dataset, obtaining accuracy improvements over a baseline, and we further improved our results by using out-of-domain data to calibrate the adjustment factor. Future work includes refining negation detection and extending the experiment to a broader spectrum of opinionated discourse beyond customer reviews.
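The mechanism described in the abstract can be illustrated with a short sketch: a lexicon-based scorer with a simple negation-detection step, where the count of negation occurrences feeds an adjustment factor that shifts the aggregate score toward negative. The lexicon entries, negator list, and the `alpha` calibration parameter are all illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a lexicon-based classifier with negation detection
# and a negation-count adjustment factor. All values are illustrative.

NEGATORS = {"not", "no", "never", "n't"}

# Toy sentiment lexicon: word -> polarity score (assumed values).
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}

def classify(tokens, alpha=0.5):
    score = 0.0
    negations = 0
    negate_next = False
    for tok in tokens:
        if tok.lower() in NEGATORS:
            negations += 1
            negate_next = True  # flip the polarity of the next lexicon hit
            continue
        polarity = LEXICON.get(tok.lower(), 0.0)
        score += -polarity if negate_next else polarity
        negate_next = False
    # Adjustment factor: treat each negation occurrence as weak evidence
    # of negative sentiment, shifting the aggregate score downward.
    score -= alpha * negations
    return "positive" if score > 0 else "negative"
```

Calibrating `alpha` on out-of-domain data, as the abstract describes, would amount to tuning this one parameter on held-out reviews from other domains.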
Report on the 21st International Conference on Case-Based Reasoning
Ontanon, Santiago (Drexel University) | Delany, Sarah Jane (Dublin Institute of Technology) | Cheetham, William E. (Capital District Physicians')
In cooperation with the Association for the Advancement of Artificial Intelligence (AAAI), the twenty-first International Conference on Case-Based Reasoning (ICCBR), the premier international meeting on research and applications in Case-Based Reasoning (CBR), was held in July 2013 in Saratoga Springs, NY. ICCBR is the annual meeting of the CBR community and the leading conference on this topic. This year ICCBR featured the Industry Day, the fifth annual Doctoral Consortium and three workshops.
The main conference track featured 16 research paper presentations, nine posters, and two invited speakers. The papers and posters reflected the state of the art of case-based reasoning, dealing both with open problems at the core of CBR (especially similarity assessment, case adaptation, and case-base maintenance) and with trending applications of CBR (especially recommender systems and computer games), as well as the intersections of CBR with other areas such as multiagent systems. The first invited speaker, Igor Jurisica from the Ontario Cancer Institute and the University of Toronto, spoke about how to scale up case-based reasoning for "big data" applications. The Case-Based Reasoning in Health Sciences workshop, organized by Isabelle Bichindaritz, Cindy Marling, and Stefania Montani, and the EXPPORT workshop (Experience Reuse: Provenance, Process-Orientation and Traces), organized by David Leake, Béatrice Fuchs, Juan A. Recio Garcia, and Stefania Montani, were held jointly.
Towards a Cognitive System that Can Recognize Spatial Regions Based on Context
Hawes, Nick (University of Birmingham) | Klenk, Matthew (Palo Alto Research Center) | Lockwood, Kate (California State University, Monterey Bay) | Horn, Graham S. (University of Birmingham) | Kelleher, John D (Dublin Institute of Technology)
In order to collaborate with people in the real world, cognitive systems must be able to represent and reason about spatial regions in human environments. Consider the command "go to the front of the classroom". The spatial region mentioned (the front of the classroom) is not perceivable using geometry alone. Instead, it is defined by its functional use, implied by nearby objects and their configuration. In this paper, we define such areas as context-dependent spatial regions and present a cognitive system able to learn them by combining qualitative spatial representations, semantic labels, and analogy. The system is capable of generating a collection of qualitative spatial representations describing the configuration of the entities it perceives in the world. It can then be taught context-dependent spatial regions using anchor points defined on these representations. From this we then demonstrate how an existing computational model of analogy can be used to detect context-dependent spatial regions in previously unseen rooms. To evaluate this process we compare detected regions to annotations made on maps of real rooms by human volunteers.
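The first stage of the pipeline described above can be sketched as follows: deriving qualitative spatial relations from metric object positions, the kind of symbolic scene description such a system builds before semantic labels and analogy are applied. The relation vocabulary, the `near_threshold` parameter, and the representation are assumptions for illustration, not the authors' actual system.

```python
# Minimal sketch: compute qualitative spatial relations ("left-of", "near")
# from metric 2-D object positions. Representation and threshold are assumed.
import math

def qualitative_relations(objects, near_threshold=1.0):
    """objects: dict mapping name -> (x, y). Returns relation triples."""
    relations = []
    names = sorted(objects)
    for a in names:
        for b in names:
            if a == b:
                continue
            ax, ay = objects[a]
            bx, by = objects[b]
            if ax < bx:
                relations.append(("left-of", a, b))
            if math.dist((ax, ay), (bx, by)) <= near_threshold:
                relations.append(("near", a, b))
    return relations
```

An analogical matcher could then align such relation sets between a taught room and an unseen room to transfer anchor points, in the spirit of the approach the abstract outlines.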
Visual Salience and Reference Resolution in Situated Dialogues: A Corpus-based Evaluation
Schuette, Niels (Dublin Institute of Technology) | Kelleher, John (Dublin Institute of Technology) | Namee, Brian Mac (Dublin Institute of Technology)
Dialogues between humans and robots are necessarily situated. Exophoric references to objects in the shared visual context are very frequent in situated dialogues, for example when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in a situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and a range of different salience metrics using data from the SCARE corpus which we have augmented with visual information. The results of our evaluation show that our computationally lightweight approach is successful, and so promising for use in human-robot dialogue systems.
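The core idea of salience-based reference resolution can be illustrated with a small sketch: score each candidate object in the visual scene by combining a simple type match against the referring expression with a visual salience value, and pick the highest-scoring object. The scene representation, the weighting scheme, and the `w_salience` parameter are invented for this illustration and are not the authors' implementation or the SCARE corpus format.

```python
# Illustrative sketch: resolve an exophoric referring expression by ranking
# candidate referents with a weighted combination of type match and visual
# salience. All names and weights are assumptions.

def resolve(expression_type, candidates, w_salience=0.7):
    """candidates: list of dicts with 'type' and 'salience' in [0, 1]."""
    def score(obj):
        type_match = 1.0 if obj["type"] == expression_type else 0.0
        return (1 - w_salience) * type_match + w_salience * obj["salience"]
    return max(candidates, key=score)
```

Under this scheme, "the box" resolves to whichever box is currently most visually salient, which is the intuition the evaluation on the augmented SCARE data tests with a range of salience metrics.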
Situating Spatial Templates for Human-Robot Interaction
Kelleher, John (Dublin Institute of Technology) | Ross, Robert (Dublin Institute of Technology) | Namee, Brian Mac (Dublin Institute of Technology) | Sloan, Colm (Dublin Institute of Technology)
Through empirical validation and computational application, template-based models of situated spatial term meaning have proven their usefulness to human-robot dialogue, but we argue in this paper that important contextual features are being ignored, resulting in over-generalization and a failure to account for actual usage in situated contexts. This is significant for human-robot dialogue because it constrains how we can create interactive systems that discuss their own physical actions and surroundings. To this end, we describe a study we conducted to determine how acceptability ratings for spatial term meaning change under oblique landmark orientations. The results demonstrate that spatial term meaning is indeed altered by interlocutor perspective in a way not predicted by current approaches to spatial term semantics.
Putting Things in Context: Situated Language Understanding for Human-Robot Dialog(ue)
Ross, Robert (Dublin Institute of Technology)
In this paper we present a model of language contextualization for spatially situated dialogue systems including service robots. The contextualization model addresses the problem of location sensitivity in language understanding for human-robot interaction. Our model is based on the application of situation-sensitive contextualization functions to a dialogue move's semantic roles — both for the resolution of specified content and the augmentation of empty roles in cases of ellipsis. Unlike the previous use of default values, this methodology provides a context-dependent discourse process which reduces unnecessary artificial clarificatory statements. We detail this model and report on a number of user studies conducted with a simulated robotic system based on this model.
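The contextualization idea described above can be sketched as a function that passes a dialogue move's semantic roles through situation-sensitive resolvers: deictic content is resolved against the current situation, and empty roles left by ellipsis are filled from salient context rather than static defaults. The role names and the situation model here are invented for illustration, not the paper's formalism.

```python
# Hypothetical illustration of situation-sensitive contextualization applied
# to a dialogue move's semantic roles. Role names and the situation model
# are assumptions made for this sketch.

def contextualize(move, situation):
    roles = dict(move["roles"])
    # Resolve deictic, underspecified content against the situation.
    if roles.get("destination") == "here":
        roles["destination"] = situation["robot_location"]
    # Augment an empty role (ellipsis) from salient context rather than
    # a fixed default value.
    if "object" not in roles and situation.get("salient_object"):
        roles["object"] = situation["salient_object"]
    return {"act": move["act"], "roles": roles}
```

For example, an elliptical command like "bring it here" would have its empty object role filled by the currently salient object and "here" resolved to the robot's location, avoiding an artificial clarification request.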
Is Silence Golden in Human-Robot Dialogue?
Ross, Robert (Dublin Institute of Technology)
The physical actions performed by any robot can be used to convey meaning to a user in human-robot interaction. While the analysis of physical actions as communicative acts is not new, it is less clear how dialogue planning policies for human-robot interaction should be influenced by the co-occurrence of physical task actions. In this short paper we report on a study that analyses the relative importance of omitting verbal feedback in situated human-robot dialogue. Results indicate that while a lack of explicit feedback can and does lead to more errors in dialogue, overall task performance times are improved, and users perceive the resultant system as better performing on a number of subjective measures.