Modern multicore computers provide an opportunity to parallelize plan recognition algorithms to decrease runtime. Viewing plan recognition as parsing based on a complete breadth-first search makes ELEXIR (engine for lexicalized intent recognition) (Geib 2009; Geib and Goldman 2011) particularly suited for parallelization. This article documents the extension of ELEXIR to utilize such modern computing platforms. We discuss multiple possible algorithms for distributing work between parallel threads and the associated performance gains. We show that the best of these algorithms provides close to linear speedup (up to a maximum number of processors), and that features of the problem domain have an impact on the achieved speedup.
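The abstract does not detail ELEXIR's work-distribution algorithms, but the general idea of parallelizing a complete breadth-first search can be sketched. The following is a minimal illustration, assuming a shared thread pool that expands each search frontier in parallel; the `expand` function here is a hypothetical stand-in for the per-state work a parser would do, not ELEXIR's actual parsing step.

```python
from concurrent.futures import ThreadPoolExecutor

def expand(state):
    """Hypothetical successor function: stands in for expanding
    one parse state. Here it grows tuples up to depth 3."""
    return [state + (c,) for c in range(2)] if len(state) < 3 else []

def parallel_bfs(initial, workers=4):
    """Complete breadth-first search, distributing each frontier
    level across a pool of worker threads."""
    frontier = [initial]
    visited = [initial]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            # Each thread expands a share of the current frontier.
            results = pool.map(expand, frontier)
            frontier = [s for succs in results for s in succs]
            visited.extend(frontier)
    return visited

states = parallel_bfs(())
```

Because every state in a level is expanded independently, the speedup is bounded by the frontier width at each level, which is one way domain features can limit the achievable parallelism.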
Although Switzerland is a small country, it is home to many internationally renowned universities and scientific institutions. The research landscape in Switzerland is rich, and AI-related themes are investigated by many teams under diverse umbrellas. This column sheds some light on selected developments and trends in AI in Switzerland as perceived by members of the Special Interest Group on Artificial Intelligence and Cognitive Science (SGAICO) organizational team, which has brought together researchers from Switzerland interested in AI and cognitive science for over 30 years.
Renz, Jochen (The Australian National University) | Ge, Xiaoyu (The Australian National University) | Gould, Stephen (The Australian National University) | Zhang, Peng (The Australian National University)
The aim of the Angry Birds AI competition (AIBIRDS) is to build intelligent agents that can play new Angry Birds levels better than the best human players. This is surprisingly difficult for AI, as it requires capabilities similar to those intelligent systems need for successfully interacting with the physical world, one of the grand challenges of AI. As such, the competition offers a simplified and controlled environment for developing and testing the necessary AI technologies: a seamless integration of computer vision, machine learning, knowledge representation and reasoning, reasoning under uncertainty, planning, and heuristic search, among others. Over the past three years there have been significant improvements, but we are still a long way from reaching the ultimate aim and, thus, there are great opportunities for participants in this competition.
Although a number of initiatives provide personalized context-aware guidance for niche use cases, a standard framework for context awareness remains lacking. This article explains how semantic technology has been exploited to generate a centralized repository of personal activity context. This data drives advanced features such as personal situation recognition and customizable rules for the context-sensitive management of personal devices and data sharing. As a proof of concept, we demonstrate how an innovative context-aware system has successfully adopted such an infrastructure.
This article presents new algorithms for inferring users' activities in a class of flexible and open-ended educational software called exploratory learning environments (ELE). Such settings provide a rich educational environment for students, but challenge teachers to keep track of students' progress and to assess their performance. This article presents techniques for recognizing students' activities in ELEs and visualizing these activities to students. It describes a new plan recognition algorithm that takes into account repetition and interleaving of activities. This algorithm was evaluated empirically using two ELEs for teaching chemistry and statistics, used by thousands of students in several countries. It was able to outperform state-of-the-art plan recognition algorithms when compared to a gold standard that was obtained from a domain expert. We also show that visualizing students' plans improves their performance on new problems when compared to an alternative visualization that consists of a step-by-step list of actions.
Big data is having a disruptive impact across the sciences. Human annotation of semantic interpretation tasks is a critical part of big data semantics, but it is based on an antiquated ideal of a single correct truth that needs to be similarly disrupted. We expose seven myths about human annotation, most of which derive from that antiquated ideal of truth, and dispel these myths with examples from our research. We propose a new theory of truth, crowd truth, that is based on the intuition that human interpretation is subjective, and that measuring annotations on the same objects of interpretation (in our examples, sentences) across a crowd will provide a useful representation of their subjectivity and the range of reasonable interpretations.
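The crowd-truth intuition above, that the spread of annotations across a crowd is itself the signal, can be made concrete with a minimal sketch. The label names, example sentences, and entropy-based disagreement measure below are illustrative assumptions, not the article's actual metrics.

```python
from collections import Counter
import math

def label_distribution(annotations):
    """Fraction of the crowd that chose each label for one sentence."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def disagreement(annotations):
    """Shannon entropy of the label distribution:
    0.0 means the crowd fully agrees; higher means more
    interpretations are considered reasonable."""
    dist = label_distribution(annotations)
    return -sum(p * math.log2(p) for p in dist.values())

# Ten hypothetical workers annotating the relation in two sentences.
clear_sentence = ["treats"] * 10
ambiguous_sentence = ["treats"] * 5 + ["causes"] * 5
```

Under a single-truth ideal the second sentence looks like annotator error; under crowd truth its high entropy is evidence that both readings are reasonable interpretations.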
There is a great deal of interest in big data, focusing mostly on dataset size. An equally important dimension of big data is variety, where the focus is to process highly heterogeneous datasets. We describe how we use semantics to address the problem of big data variety. We also describe Karma, a system that implements our approach and show how Karma can be applied to integrate data in the cultural heritage domain. In this use case, Karma integrates data across many museums even though the datasets from different museums are highly heterogeneous.