The paper introduces mixed networks, a new framework for expressing and reasoning with probabilistic and deterministic information. The framework combines belief networks with constraint networks, defining their semantics and graphical representation. We also introduce the AND/OR search space for graphical models, and develop a new linear-space search algorithm. This provides the basis for understanding the benefits of processing the constraint information separately, which results in pruning of the search space. When the constraint part is tractable or has a small number of solutions, using the mixed representation can be exponentially more effective than using pure belief networks, which model constraints as conditional probability tables.
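The pruning idea above can be illustrated with a minimal sketch: a depth-first search over a toy mixed network that consults the deterministic constraint separately from the probability tables and cuts off inconsistent branches early. The network, tables, and function names below are illustrative assumptions, not the paper's actual algorithm or notation.

```python
# Toy mixed network with two binary variables A and B.
# Probabilistic part: P(A) and P(B | A). Deterministic part: constraint A != B.
P_A = {0: 0.6, 1: 0.4}
P_B_GIVEN_A = {0: {0: 0.7, 1: 0.3},
               1: {0: 0.2, 1: 0.8}}

def constraint(a, b):
    # Deterministic information kept separate from the CPTs,
    # rather than encoded as zero-probability CPT entries.
    return a != b

def probability_of_consistent_assignments():
    # Depth-first enumeration of the search space; a branch is pruned
    # the moment the constraint fails, so its probability terms are
    # never multiplied out.
    total = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if not constraint(a, b):
                continue  # pruned branch
            total += P_A[a] * P_B_GIVEN_A[a][b]
    return total

print(probability_of_consistent_assignments())  # 0.6*0.3 + 0.4*0.2 = 0.26
```

In a pure belief-network encoding, the constraint would appear as a CPT full of zeros and the search would still visit the inconsistent assignments; keeping the constraint explicit is what licenses the early cutoff.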
Tachmazidis, Ilias (Foundation for Research and Technology - Hellas and University of Crete) | Antoniou, Grigoris (University of Huddersfield and Foundation for Research and Technology - Hellas) | Flouris, Giorgos (Foundation for Research and Technology - Hellas) | Kotoulas, Spyros (IBM Research)
We are witnessing an explosion of available data from the Web, government authorities, scientific databases, sensors and more. Such datasets could benefit from the introduction of rule sets encoding commonly accepted rules or facts, application- or domain-specific rules, commonsense knowledge etc. This raises the question of whether, how, and to what extent knowledge representation methods are capable of handling the vast amounts of data for these applications. In this paper, we consider nonmonotonic reasoning, which has traditionally focused on rich knowledge structures. In particular, we consider defeasible logic, and analyze how parallelization, using the MapReduce framework, can be used to reason with defeasible rules over huge data sets. Our experimental results demonstrate that defeasible reasoning over billions of facts is feasible, and has the potential to scale to trillions of facts.
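The map/reduce decomposition described above can be sketched in miniature: the map phase keys every fact by the individual it is about, so that all evidence relevant to one defeasible conclusion lands at the same reducer, which then checks for defeaters locally. The knowledge base, rule, and predicate names below are made-up toy examples, not the authors' rule sets or their Hadoop implementation.

```python
from collections import defaultdict

# Hypothetical toy facts: (individual, predicate) pairs.
FACTS = [("tweety", "bird"), ("opus", "bird"), ("opus", "penguin")]

def map_phase(facts):
    # Map: key each fact by individual, so every reducer sees all
    # the facts needed to decide one individual's conclusions.
    for individual, predicate in facts:
        yield individual, predicate

def reduce_phase(grouped):
    # Reduce: apply a single defeasible rule per individual:
    #   bird => flies, defeated by penguin.
    conclusions = {}
    for individual, predicates in grouped.items():
        if "bird" in predicates:
            conclusions[individual] = "penguin" not in predicates  # defeater check
    return conclusions

def run(facts):
    # Shuffle/sort step of MapReduce, simulated with a dict of sets.
    grouped = defaultdict(set)
    for key, value in map_phase(facts):
        grouped[key].add(value)
    return reduce_phase(grouped)

print(run(FACTS))  # {'tweety': True, 'opus': False}
```

The point of the keying scheme is that defeater checks become purely local to a reducer, which is what makes the reasoning embarrassingly parallel over huge fact sets.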
Authoring narrative content for interactive digital media can be both difficult and time consuming. The research proposed here aims at enhancing the capabilities of content creators through the development of a computational model that improves the quality of automatically generated stories, potentially decreasing the burden placed on the author. The quality and believability of a story can be significantly enhanced by the presence of compelling characters. To achieve this objective, I aim to develop a choice-based computational model that facilitates the automatic generation of narrative whose characters are made more compelling by distinct personality characteristics.
This article discusses building a computable design process model, which is a prerequisite for realizing intelligent computer-aided design systems. First, we introduce general design theory, from which a descriptive model of design processes is derived. In this model, the concept of metamodels plays a crucial role in describing the evolutionary nature of design. Second, we show a cognitive design process model obtained by observing design processes using a protocol analysis method. We then discuss a computable model that can explain most parts of the cognitive model and also interpret the descriptive model. In the computable model, a design process is regarded as an iterative logical process realized by abduction, deduction, and circumscription. We implemented a design simulator that can trace design processes in which design specifications and design solutions are gradually revised as the design proceeds.
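The iterative abduction/deduction cycle described above can be sketched as a propose-and-test loop: candidate designs are hypothesized (abduction), their consequences are derived and checked against the specification (deduction), and the search continues until a consistent solution is found. The specification fields, candidate encoding, and function names below are invented for illustration and do not reproduce the article's design simulator.

```python
def deduce(candidate):
    # Deduction: derive properties entailed by a candidate design.
    # Here a candidate is a single size parameter; the derived
    # properties are toy linear consequences of that parameter.
    return {"weight": candidate * 2, "cost": candidate * 3}

def satisfies(properties, spec):
    # Check the deduced consequences against the current specification.
    return (properties["weight"] <= spec["max_weight"]
            and properties["cost"] <= spec["max_cost"])

def design_loop(spec, candidates):
    # Abduction is approximated by proposing candidates in turn;
    # each failed test triggers a revision (here: the next candidate).
    for candidate in candidates:
        if satisfies(deduce(candidate), spec):
            return candidate
    return None

print(design_loop({"max_weight": 5, "max_cost": 7}, [4, 3, 2, 1]))  # 2
```

In the article's model the revision step also updates the specification itself via circumscription; this sketch only revises the candidate solution, which is the simplest half of that cycle.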
Modeling crowd behavior is an important challenge for cognitive modelers. Models of crowd behavior facilitate analysis and prediction of the behavior of groups of people who are in close geographical or logical proximity and who are affected by each other's presence and actions. Existing models of crowd behavior, in a variety of fields, leave many open challenges. In particular, psychological models often offer only qualitative descriptions and do not easily permit algorithmic replication, while computer science models are often simplistic, treating agents as simple deterministic particles. We propose a novel model of crowd behavior based on Festinger's Social Comparison Theory (SCT), a social psychology theory known and expanded since the early 1950s. We propose a concrete algorithmic framework for SCT, and evaluate its implementations in several crowd behavior scenarios. We show that our SCT model produces improved results compared to base models from the literature. We also discuss an implementation of SCT in the Soar cognitive architecture, and the question this implementation raises as to the role of social reasoning in cognitive architectures.
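The core SCT mechanism, agents comparing themselves to similar others and acting to reduce observed differences, can be sketched as a simple state-update rule: each agent finds its most similar neighbor within a comparison threshold and moves a step toward that neighbor's state. The one-dimensional state, threshold, and update rate below are illustrative assumptions, not the authors' published algorithmic framework.

```python
def sct_step(states, threshold=1.0, rate=0.5):
    # One synchronous SCT-style update over scalar agent states.
    new_states = []
    for i, s in enumerate(states):
        # Compare against all other agents; pick the most similar one
        # (smallest absolute state difference).
        others = [(abs(s - t), t) for j, t in enumerate(states) if j != i]
        diff, target = min(others)
        if 0 < diff <= threshold:
            # Act to reduce the difference with the comparison target.
            s = s + rate * (target - s)
        new_states.append(s)
    return new_states

print(sct_step([0.0, 0.4, 5.0]))  # [0.2, 0.2, 5.0]
```

Note the threshold: agents too dissimilar are simply not comparison targets, which is what lets SCT produce clustered, group-like behavior rather than uniform averaging across the whole crowd.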