Ferguson, George


Crowd Formalization of Action Conditions

AAAI Conferences

Training intelligent systems is a time-consuming and costly process that often limits their application to real-world problems. Prior work in crowdsourcing has attempted to compensate for this challenge by generating sets of labeled training data for machine learning algorithms. In this work, we seek to move beyond collecting just statistical data and explore how to gather structured, relational representations of a scenario using the crowd. We focus on activity recognition because of its broad applicability, high level of variation between individual instances, and the difficulty of training systems a priori. We present ARchitect, a system that uses the crowd to ascertain pre- and postconditions for actions observed in a video and to find relations between actions. Our ultimate goal is to identify multiple valid execution paths from a single set of observations, which suggests one-off learning from the crowd is possible.
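The abstract above describes annotating observed actions with pre- and postconditions and deriving multiple valid execution orderings from them. A minimal sketch of that idea, assuming a simple set-of-facts world state (all names here are illustrative, not the paper's actual representation):

```python
# Hypothetical sketch of actions annotated with pre- and postconditions,
# from which valid execution orderings can be checked. Names and the
# set-of-facts state model are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: set = field(default_factory=set)   # facts required beforehand
    postconditions: set = field(default_factory=set)  # facts made true afterwards

def executable(action, state):
    """An action can run when all its preconditions hold in the current state."""
    return action.preconditions <= state

def apply(action, state):
    """Applying an action adds its postconditions to the world state."""
    return state | action.postconditions

# Toy scenario: "fill kettle" must precede "boil water"; any ordering
# that satisfies the precondition chain is a valid execution path.
fill = Action("fill kettle", set(), {"kettle filled"})
boil = Action("boil water", {"kettle filled"}, {"water boiled"})

state = set()
assert not executable(boil, state)  # boiling before filling is invalid
state = apply(fill, state)
assert executable(boil, state)      # now the precondition holds
state = apply(boil, state)
```

Under this kind of model, any topological ordering of the precondition/postcondition dependencies counts as a valid execution path, which is what allows several orderings to be recovered from a single observed sequence.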


A Cognitive Model for Collaborative Agents

AAAI Conferences

We describe a cognitive model of a collaborative agent that can serve as the basis for automated systems that must collaborate with other agents, including humans, to solve problems. This model builds on standard approaches to cognitive architecture and intelligent agency, as well as formal models of speech acts, joint intention, and intention recognition. The model is nonetheless intended for practical use in the development of collaborative systems.


CARDIAC: An Intelligent Conversational Assistant for Chronic Heart Failure Patient Health Monitoring

AAAI Conferences

We describe CARDIAC, a prototype for an intelligent conversational assistant that provides health monitoring for chronic heart failure patients. CARDIAC supports user initiative through its ability to understand natural language and connect it to intention recognition. The natural language interface allows patients to interact with CARDIAC without special training. The system is designed to understand information that arises spontaneously in the course of the interview. If the patient gives more detail than necessary for answering a question, the system updates the user model accordingly. CARDIAC is a first step towards developing cost-effective, customizable, automated in-home conversational assistants that help patients manage their care and monitor their health using natural language.


AAAI 2007 Spring Symposium Series Reports

AI Magazine

The 2007 Spring Symposium Series was held Monday through Wednesday, March 26-28, 2007, at Stanford University, California. The titles of the nine symposia in this symposium series were (1) Control Mechanisms for Spatial Knowledge Processing in Cognitive/Intelligent Systems, (2) Game Theoretic and Decision Theoretic Agents, (3) Intentions in Intelligent Systems, (4) Interaction Challenges for Artificial Assistants, (5) Logical Formalizations of Commonsense Reasoning, (6) Machine Reading, (7) Multidisciplinary Collaboration for Socially Assistive Robotics, (8) Quantum Interaction, and (9) Robots and Robot Venues: Resources for AI Education.


Mixed-Initiative Systems for Collaborative Problem Solving

AI Magazine

Mixed-initiative systems are a popular approach to building intelligent systems that can collaborate naturally and effectively with people. But true collaborative behavior requires an agent to possess a number of capabilities, including reasoning, communication, planning, execution, and learning. We describe an integrated approach to the design and implementation of a collaborative problem-solving assistant based on a formal theory of joint activity and a declarative representation of tasks. This approach builds on prior work by us and by others on mixed-initiative dialogue and planning systems.


Toward Conversational Human-Computer Interaction

AI Magazine

The belief that humans will be able to interact with computers in conversational speech has long been a favorite subject in science fiction, reflecting the persistent belief that spoken dialogue would be the most natural and powerful user interface to computers. With recent improvements in computer technology and in speech and language processing, such systems are starting to appear feasible. There are significant technical problems that still need to be solved before speech-driven interfaces become truly conversational. This article describes the results of a 10-year effort building robust spoken dialogue systems at the University of Rochester.