A CASE-BASED MODEL OF CREATIVITY

SCOTT R. TURNER
Department of Computer Science
University of California, Los Angeles
Los Angeles CA 90024 USA

Abstract

Creativity - creating new solutions to problems - is an integral part of the problem-solving process. This paper presents a cognitive model of creativity in which a case-based problem-solver is augmented with (1) a creative drive and (2) a set of creativity heuristics. New solutions are discovered by solving a slightly different problem and adapting that solution to the original problem. By repeating this process, a creative problem-solver can discover new solutions that are novel, useful, and very different from known solutions. This model has been implemented in a computer program called MINSTREL. MINSTREL has been used for planning and problem-solving, to tell stories, and to invent mechanical devices.

1 Introduction

Creativity is an important element of human cognition. We all invent on a daily basis: we fix cars with spare change and baling wire, invent jokes based on the latest domestic crisis, and make up bedtime stories for our children. The ability to invent original, useful solutions to problems is a fundamental process of human thought. To understand human cognition, we must understand the processes of creativity: the goals that drive people to create and the mechanisms they use to invent novel and useful solutions to their problems. This paper presents a model of creative reasoning as an extension to case-based problem-solving.
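The core loop described above - when no known solution fits, solve a slightly transformed problem and adapt that solution back - can be sketched as follows. This is a minimal illustration of the general idea, not MINSTREL's actual implementation; the case library, transform rules, and problem strings are all invented for the example.

```python
# Hypothetical sketch of creative case-based search: retrieve a solution
# if one is known; otherwise transform the problem, solve the transformed
# version (possibly recursively), and adapt that solution back.

case_library = {
    "open door": "turn handle",
    "open jar": "twist lid",
}

# Creativity heuristics modeled as simple problem transforms (illustrative).
transforms = [
    lambda p: p.replace("window", "door"),   # substitute a similar object
    lambda p: p.replace("unlock", "open"),   # substitute a similar action
]

def solve(problem, depth=2):
    """Return a known solution, or creatively adapt one from a similar problem."""
    if problem in case_library:
        return case_library[problem]
    if depth == 0:
        return None
    for transform in transforms:
        variant = transform(problem)
        if variant != problem:
            solution = solve(variant, depth - 1)
            if solution is not None:
                # Adapt: store the new solution as a case of its own,
                # so it is available for future retrieval and reuse.
                case_library[problem] = solution
                return solution
    return None

print(solve("open window"))  # no direct case; reached via "open door"
```

Because each adapted solution is stored as a new case, repeating the process lets the solver drift progressively further from the original library, which matches the paper's claim that iteration yields solutions very different from known ones.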
Research in psychology often involves building computational models to test theories. The usual approach is to build models using the most convenient tool available. Newell instead proposed building models within the framework of general-purpose cognitive architectures. One advantage of this approach is that it can sometimes provide more perspicuous explanations of experimental results across different but related tasks, as emerging from a common underlying architecture. In this paper, we propose the use of a bimodal cognitive architecture called biSoar to model phenomena in spatial representation and reasoning. We show that biSoar can provide an architectural explanation for the simplification phenomenon that arises in experiments on spatial recall. We build a biSoar model for one such spatial recall task, wayfinding, and discuss the role of the architecture in the emergence of simplification.
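The simplification phenomenon itself can be illustrated independently of any architecture: recalled routes tend to lose metric detail, with nearly collinear waypoints collapsing into straight legs. The toy sketch below is our own illustration of that effect, not the biSoar model; the route data and tolerance are arbitrary.

```python
# Toy illustration of route simplification: drop any waypoint that lies
# close to the straight line between its neighbors, so small wiggles in
# the recalled path collapse into straight segments.

def simplify_route(points, tolerance=0.1):
    """Return a copy of the route with near-collinear waypoints removed."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        (x0, y0), (x1, y1), (x2, y2) = prev, cur, nxt
        # Perpendicular distance of cur from the segment prev -> nxt.
        num = abs((y2 - y0) * x1 - (x2 - x0) * y1 + x2 * y0 - y2 * x0)
        den = ((y2 - y0) ** 2 + (x2 - x0) ** 2) ** 0.5
        if den == 0 or num / den > tolerance:
            kept.append(cur)
    kept.append(points[-1])
    return kept

route = [(0, 0), (1, 0.05), (2, 0), (2, 1)]
print(simplify_route(route))  # the slight jog at (1, 0.05) is smoothed away
```

An architectural account, as proposed in the paper, would explain why such detail is lost as a consequence of how the architecture represents and stores spatial information, rather than by stipulating a filter like this one.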
Modeling crowd behavior is an important challenge for cognitive modelers. Models of crowd behavior facilitate analysis and prediction of the behavior of groups of people who are in close geographical or logical proximity and are affected by each other's presence and actions. Existing models of crowd behavior, across a variety of fields, leave many open challenges. In particular, psychological models often offer only qualitative descriptions and do not easily permit algorithmic replication, while computer-science models are often simplistic, treating agents as simple deterministic particles. We propose a novel model of crowd behavior based on Festinger's Social Comparison Theory (SCT), a social psychology theory known and expanded since the early 1950s. We propose a concrete algorithmic framework for SCT and evaluate its implementations in several crowd behavior scenarios. We show that our SCT model produces improved results compared to baseline models from the literature. We also discuss an implementation of SCT in the Soar cognitive architecture, and the questions this implementation raises about the role of social reasoning in cognitive architectures.
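The central SCT idea - agents compare themselves to similar others and act to reduce the differences they perceive - can be sketched as a simple agent update rule. This is a hedged illustration of the general mechanism only; the paper's concrete algorithmic framework is not reproduced here, and the comparison range, gain, and positions are invented.

```python
# Minimal SCT-style update: each agent compares itself to its most
# similar (here: nearest) neighbor within a comparison range, then
# moves to reduce the perceived difference.

import math

def sct_step(agents, comparison_range=5.0, gain=0.1):
    """One synchronous update; agents is a list of (x, y) positions."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    updated = []
    for a in agents:
        # Festinger: we compare ourselves only to sufficiently similar
        # others, modeled here as neighbors within comparison_range.
        neighbors = [b for b in agents
                     if b is not a and dist(a, b) <= comparison_range]
        if not neighbors:
            updated.append(a)  # no comparable other; no change
            continue
        # Compare to the most similar neighbor and reduce the gap.
        target = min(neighbors, key=lambda b: dist(a, b))
        updated.append((a[0] + gain * (target[0] - a[0]),
                        a[1] + gain * (target[1] - a[1])))
    return updated

agents = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
print(sct_step(agents))  # the two nearby agents converge; the distant one is unaffected
```

Note how this differs from the particle models criticized above: the update is driven by a social comparison rule rather than by physical forces, and an agent with no comparable others does not move at all.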
This post is the second in a series of three posts, each of which discusses the fundamental concepts of Artificial Intelligence. In our first post we discussed AI definitions, helping our readers to understand the basic concepts behind AI and giving them the tools required to sift through the many AI articles out there and form their own opinion. In this second post, we will discuss several notions which are important in understanding the limits of AI.

Figure 1: How intelligent can Artificial Intelligence get?

When we speak about how far AI can go, there are two "philosophies": strong AI and weak AI. The most commonly followed philosophy is that of weak AI, which holds that machines can manifest certain intelligent behavior to solve specific (hard) tasks, but that they will never equal the human mind.
This article discusses building a computable design process model, which is a prerequisite for realizing intelligent computer-aided design systems. First, we introduce general design theory, from which a descriptive model of design processes is derived. In this model, the concept of metamodels plays a crucial role in describing the evolutionary nature of design. Second, we show a cognitive design process model obtained by observing design processes using a protocol analysis method. We then discuss a computable model that can explain most parts of the cognitive model and also interpret the descriptive model. In the computable model, a design process is regarded as an iterative logical process realized by abduction, deduction, and circumscription. We implemented a design simulator that can trace design processes in which design specifications and design solutions are gradually revised as the design proceeds.
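The iterative cycle described above can be caricatured in a few lines: abduction proposes candidate designs from the specification, deduction derives each candidate's properties, and the specification or solution is revised until they agree. The sketch below is a toy illustration under invented domain knowledge, not the paper's design simulator, and it only loosely mimics circumscription via spec relaxation.

```python
# Toy design loop: abduce candidates, deduce their properties, and
# revise the specification when no candidate covers it.

# Domain knowledge (invented): design feature -> properties it entails.
entails = {
    "steel_frame": {"strong", "heavy"},
    "aluminum_frame": {"strong", "light"},
    "plastic_frame": {"light", "cheap"},
}

def abduce(spec):
    """Propose candidates whose deduced properties cover the spec."""
    return [d for d, props in entails.items() if spec <= props]

def design(spec, max_iterations=5):
    for _ in range(max_iterations):
        candidates = abduce(spec)
        if candidates:
            chosen = candidates[0]
            # Deduction: read off the properties the design entails.
            return chosen, entails[chosen]
        # No candidate covers the spec: relax it and try again, mimicking
        # the gradual revision of specifications as the design proceeds.
        spec = set(sorted(spec)[:-1]) if spec else spec
    return None, set()

solution, properties = design({"strong", "light"})
print(solution)  # aluminum_frame entails both required properties
```

The revision step is the interesting part: as in the cognitive model derived from the protocol analyses, neither the specification nor the solution is fixed in advance; both are adjusted until a consistent pair is found.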