For humans and automation to collaborate effectively on tasks, all participants need access to a common representation of potentially relevant situational information, or context. This article describes a general framework for building context-aware interactive intelligent systems that comprises three major functions: (1) capture human-system interactions and infer implicit context; (2) analyze and predict user intent and goals; and (3) provide effective augmentation or mitigation strategies to improve performance, such as delivering timely, personalized information and recommendations, adjusting levels of automation, or adapting visualizations. Our goal is to develop an approach, reusable across domains, that enables humans to interact with automation more intuitively and naturally by modeling context and algorithms at a higher level of abstraction. We first provide an operational definition of context and discuss challenges and opportunities for exploiting context. We then describe our current work toward a general platform that supports developing context-aware applications in a variety of domains, and explore an example use case illustrating how our framework can facilitate personalized collaboration within an information management and decision support tool. Future work includes evaluating our framework.
This article presents new algorithms for inferring users' activities in a class of flexible and open-ended educational software called exploratory learning environments (ELEs). Such settings provide a rich educational environment for students, but challenge teachers to keep track of students' progress and to assess their performance. This article presents techniques for recognizing students' activities in ELEs and visualizing these activities to students. It describes a new plan recognition algorithm that takes into account repetition and interleaving of activities. This algorithm was evaluated empirically using two ELEs for teaching chemistry and statistics used by thousands of students in several countries. It outperformed state-of-the-art plan recognition algorithms when compared against a gold standard obtained from a domain expert. We also show that visualizing students' plans improves their performance on new problems when compared to an alternative visualization that consists of a step-by-step list of actions.
Modern multicore computers provide an opportunity to parallelize plan recognition algorithms to decrease runtime. Viewing plan recognition as parsing based on a complete breadth-first search makes ELEXIR (engine for lexicalized intent recognition) (Geib 2009; Geib and Goldman 2011) particularly well suited to parallelization. This article documents the extension of ELEXIR to utilize such modern computing platforms. We discuss multiple possible algorithms for distributing work between parallel threads and the associated performance gains. We show that the best of these algorithms provides close to linear speedup, up to a maximum number of processors, and that features of the problem domain affect the speedup achieved.
Knowledge workers work on many tasks per day and often switch between them. When working on a task, a knowledge worker must typically search, navigate, and dig through file systems, documents, and emails, all of which introduce friction into the flow of work. This friction can be reduced, and productivity improved, by capturing and modeling the context of a knowledge worker's task based on how the knowledge worker interacts with an information space. Captured task contexts can be used to facilitate switching between tasks, to focus a user interface on just the information needed by a task, and to recommend other potentially useful information. We report on the use of task contexts and the effect of context on productivity for a particular kind of knowledge worker, software developers. We also report on qualitative findings about the use of task contexts by a more general population of knowledge workers.
Although Switzerland is a small country, it is home to many internationally renowned universities and scientific institutions. The research landscape in Switzerland is rich, and AI-related themes are investigated by many teams under diverse umbrellas. This column sheds light on selected developments and trends in AI in Switzerland as perceived by members of the organizational team of the Special Interest Group on Artificial Intelligence and Cognitive Science (SGAICO), which has brought together researchers from Switzerland interested in AI and cognitive science for over 30 years.
We discuss the nature of big data and address the role of semantics in analyzing and processing big data that arises in the context of physical-cyber-social systems. To handle volume, we advocate semantic perception, which can convert low-level observational data into higher-level abstractions more suitable for decision making. To handle variety, we resort to semantic models and annotations of data so that intelligent processing can be done independently of the heterogeneity of data formats and media. To handle velocity, we seek to use a continuous semantics capability to dynamically create event- or situation-specific models and recognize relevant new concepts, entities, and facts. To handle veracity, we explore trust models and approaches to glean trustworthiness. These four Vs of big data are harnessed by semantics-empowered analytics to derive value in support of applications spanning the physical-cyber-social continuum.