Silverman, Barry G.


Research Workshop on Expert Judgment, Human Error, and Intelligent Systems

AI Magazine

This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, whereas judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost none of the participants had met before, and for most of them the discussion was novel and lively. Many areas of previously unexamined overlap were identified, and an agenda of research needs was developed.


Expert Critics in Engineering Design: Lessons Learned and Research Needs

AI Magazine

"Criticism should not be querulous and wasting, all knife and root puller, but guiding, instructive, inspiring, a South wind, not an East wind." (Ralph Waldo Emerson)

Human error is an increasingly important and addressable concern in modern-day automation. The volumes of fast-changing sensory data that one needs to process, the unfriendly computers found in most institutions, and the technology that surrounds us represent accidents waiting to happen. We get by because humans excel at coping; high-technology accidents occur because the operator has virtually no way out. For example, in the Challenger explosion, the shortcomings of the O-rings had been known for several years. In other accidents, hundreds of alarms sounding simultaneously all contributed to the disaster. Likewise, when the British fleet was sent to defend the Falkland Islands, a ship's defenses had been shut down because they interfered with its communications equipment; it was at this point that the Argentines released their missile and sank an unsuspecting British ship. The operator had no inkling of the ramifications of the system designs under the current operating conditions. Such interference problems are also increasingly evident on civilian automobiles, airplanes, and ships that cram telephones, radios, computers, radar devices, and other electromagnetically incompatible devices into close proximity. What feedback strategy (for example, storytelling or first-principle lecturing) will most constructively correct the human error? The human error literature documents many such differences, but there are no models there or in the AI literature of errors that result from proficient task performers practicing in a natural environment; new error and critiquing models need to capture and reflect this difference. This article examines the remarkable potential of intelligent computer-aided design (ICAD) to mitigate such problems. Specifically, we examine the electromagnetic compatibility domain; the lessons learned in this domain are relevant to all engineering design applications that must factor any operational (or manufacturability, sales, or other downstream) considerations into the design. We begin by examining the design process and the cognitive difficulties it poses. The designer uses a variety of cognitive operators to generate a design, test it under various conditions, refine it until a stopping rule is reached, and then store the design as a prototype or analog to help start a new process for the next design problem. The design process is sufficiently complex that a correct and complete design simply cannot be deduced from starting conditions or simulation model results.
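The generate-test-refine loop in the last paragraph maps naturally onto code. Below is a minimal sketch, not drawn from the article: it assumes a toy one-parameter "design," illustrative operator and test functions, and a prototype library that seeds each new design process, which is how the stored analog from one problem helps start the next.

```python
import random

# Sketch of the abstract's design loop: generate a candidate (reusing a stored
# prototype if one exists), test it, refine until a stopping rule fires, then
# store the result as an analog for the next design problem.

PROTOTYPE_LIBRARY = []  # finished designs, reused to seed later processes


def generate(seed):
    """Start from a stored prototype when available, else from scratch."""
    return dict(seed) if seed else {"param": random.uniform(0.0, 10.0)}


def test(design, target):
    """Score the candidate against the operating conditions (a toy target)."""
    return abs(design["param"] - target)


def refine(design, target):
    """One 'cognitive operator': nudge the parameter toward a better score."""
    step = 0.5 if design["param"] < target else -0.5
    return {"param": design["param"] + step}


def design_process(target, tolerance=0.25, max_iterations=50):
    candidate = generate(PROTOTYPE_LIBRARY[-1] if PROTOTYPE_LIBRARY else None)
    for _ in range(max_iterations):
        if test(candidate, target) <= tolerance:  # stopping rule
            break
        candidate = refine(candidate, target)
    PROTOTYPE_LIBRARY.append(candidate)  # store as prototype/analog
    return candidate


if __name__ == "__main__":
    print(design_process(target=7.0))   # starts from scratch
    print(design_process(target=7.5))   # starts from the stored prototype
```

The prototype library is the point of the abstract's final step: because a correct design cannot be deduced from starting conditions alone, each finished design is kept as an analog that gives the next process a better starting point.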


Full-Sized Knowledge-Based Systems Research Workshop

AI Magazine

The Full-Sized Knowledge-Based Systems Research Workshop was held May 7-8, 1990, in Washington, D.C., as part of the AI Systems in Government Conference, sponsored by the IEEE Computer Society, the MITRE Corporation, and George Washington University in cooperation with AAAI. The goal of the workshop was to convene an international group of researchers and practitioners to share insights into the problems of building and deploying full-sized knowledge-based systems (FSKBSs).


Critiquing Human Judgment Using Knowledge-Acquisition Systems

AI Magazine

Automated knowledge-acquisition systems have focused on embedding in their software a cognitive model of a key knowledge worker that allows the system to acquire a knowledge base by interviewing domain experts just as the knowledge worker would. Two sets of research questions arise: (1) What theories, strategies, and approaches will allow the modeling process to be facilitated, accelerated, and, possibly, automated? If automated knowledge-acquisition systems reduce the bottleneck associated with acquiring knowledge bases, how can the bottleneck of building the automated knowledge-acquisition system itself be broken? (2) If the automated knowledge-acquisition system centers on having an effective cognitive model of the key knowledge worker(s), to what extent does this model account for, and attempt to influence, human bias in knowledge base rule generation? That is, humans are known to be subject to errors and cognitive biases in their judgment processes. How can an automated system critique and influence such biases in a positive fashion, what common patterns exist across applications, and can models of influencing behavior be described and standardized? This article answers these research questions by presenting several prototypical scenes depicting bias and debiasing strategies.
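One such scene can be made concrete in a few lines. The sketch below is not from the article; the rule fields, bias thresholds, and critique messages are all hypothetical. It shows one way an automated knowledge-acquisition system might screen each elicited rule for simple bias patterns (here, overconfidence and small-sample generalization) and hand the expert a debiasing critique before the rule enters the knowledge base.

```python
# Hypothetical debiasing pass inside a knowledge-acquisition interview:
# each candidate rule is checked against simple bias patterns, and rules
# that match are returned to the expert with a critique instead of being
# committed to the knowledge base.

KNOWN_BIAS_CHECKS = {
    "overconfidence": lambda rule: rule["confidence"] >= 0.99,
    "small_sample": lambda rule: rule["supporting_cases"] < 5,
}

CRITIQUE_MESSAGES = {
    "overconfidence": "Near-certain confidence: can you recall a counterexample?",
    "small_sample": "Very few supporting cases: is this rule representative?",
}


def critique(rule):
    """Return a debiasing prompt for every bias pattern the rule matches."""
    return [CRITIQUE_MESSAGES[name]
            for name, check in KNOWN_BIAS_CHECKS.items() if check(rule)]


def acquire(candidate_rules):
    """Accept clean rules into the knowledge base; flag the rest for revision."""
    knowledge_base, flagged = [], []
    for rule in candidate_rules:
        prompts = critique(rule)
        (flagged if prompts else knowledge_base).append((rule, prompts))
    return knowledge_base, flagged


if __name__ == "__main__":
    rules = [
        {"if": "vibration_high", "then": "bearing_wear",
         "confidence": 0.80, "supporting_cases": 40},
        {"if": "alarms_A_and_B", "then": "sensor_fault",
         "confidence": 1.00, "supporting_cases": 2},
    ]
    kb, review = acquire(rules)
    print(len(kb), "rule(s) accepted;", len(review), "returned with critiques")
```

Whether such checks generalize is exactly the article's question about common patterns across applications: it is the table of bias checks, not the interview loop, that would need to be described and standardized.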