Epstein, Susan L.
Robots in the Garden: Artificial Intelligence and Adaptive Landscapes
Zhang, Zihao, Epstein, Susan L., Breen, Casey, Xia, Sophia, Zhu, Zhigang, Volkmann, Christian
This paper introduces ELUA, the Ecological Laboratory for Urban Agriculture, a collaboration among landscape architects, architects and computer scientists who specialize in artificial intelligence, robotics and computer vision. ELUA has two gantry robots, one indoors and the other outside on the rooftop of a 6-story campus building. Each robot can seed, water, weed, and prune in its garden. To support responsive landscape research, ELUA also includes sensor arrays, an AI-powered camera, and an extensive network infrastructure. This project demonstrates a way to integrate artificial intelligence into an evolving urban ecosystem, and encourages landscape architects to develop an adaptive design framework where design becomes a long-term engagement with the environment.
Navigation, Cognitive Spatial Models, and the Mind
Epstein, Susan L. (Hunter College and the Graduate Center of the City University of New York)
Because navigation produces readily observable actions, it provides an important window into how perception and reasoning support intelligent behavior. This paper summarizes recent results on navigation from the perspectives of cognitive neuroscience, cognitive psychology, and cognitive robotics. Together they argue for the significance of a learned spatial cognitive model. The feasibility of such a model for navigation is demonstrated, and important issues are raised for a standard model of the mind.
Toward Crowd-Sensitive Path Planning
Aroor, Anoop (City University of New York) | Epstein, Susan L. (Hunter College, City University of New York)
If a robot can predict crowds in parts of its environment that are inaccessible to its sensors, then it can plan to avoid them. This paper proposes a fast, online algorithm that learns average crowd densities in different areas. It also describes how these densities can be incorporated into existing navigation architectures. In simulation across multiple challenging crowd scenarios, the robot reaches its target faster, travels less, and risks fewer collisions than if it were to plan with the traditional A* algorithm.
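The idea in this abstract can be sketched in code. The fragment below is an illustration only, not the paper's algorithm: it assumes a grid world, a running-average estimate of crowd density per cell, and a simple penalty weight (`weight`) that folds those densities into A*'s step cost. The class and parameter names (`CrowdMap`, `weight`) are hypothetical.

```python
import heapq

class CrowdMap:
    """Online running average of observed crowd density per grid cell."""
    def __init__(self):
        self.counts = {}   # cell -> number of observations
        self.means = {}    # cell -> mean observed density

    def observe(self, cell, density):
        n = self.counts.get(cell, 0) + 1
        m = self.means.get(cell, 0.0)
        self.counts[cell] = n
        self.means[cell] = m + (density - m) / n  # incremental mean update

    def density(self, cell):
        return self.means.get(cell, 0.0)

def a_star(start, goal, neighbors, crowd, weight=5.0):
    """A* search where each step is penalized by learned crowd density."""
    def h(c):  # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        for nxt in neighbors(cell):
            step = 1.0 + weight * crowd.density(nxt)  # crowded cells cost more
            heapq.heappush(frontier, (g + step + h(nxt), g + step, nxt, path + [nxt]))
    return None
```

With densities learned online in this way, a cell repeatedly observed to be crowded becomes expensive to traverse, so the planner detours around it even though plain A* would route straight through.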
Case-Based Meta-Prediction for Bioinformatics
Yun, Xi (The Graduate Center of The City University of New York) | Epstein, Susan L. (The Graduate Center and Hunter College of The City University of New York) | Han, Weiwei (Jilin University) | Xie, Lei (The Graduate Center and Hunter College of The City University of New York)
Before laboratory testing, bioinformatics problems often require a machine-learned predictor to identify the most likely choices among a wealth of possibilities. Researchers may advocate different predictors for the same problem, none of which is best in all situations. This paper introduces a case-based meta-predictor that combines a set of elaborate, pre-existing predictors to improve their accuracy on a difficult and important problem: protein-ligand docking. The method focuses on the reliability of its component predictors, and has broad potential applications in biology and chemistry. Despite noisy and biased input, the method outperforms its individual components on benchmark data. It provides a promising way to improve the performance of virtual compound screening, and thereby to reduce the time and cost of drug discovery.
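A minimal sketch of the general idea — not the paper's actual meta-predictor, whose case-based reasoning is more elaborate — weights each component predictor by how reliable it proved on the most similar previously solved cases. All names here (`meta_predict`, the case-base record layout) are illustrative assumptions.

```python
def meta_predict(query, predictors, case_base, similarity, k=3):
    """Combine component predictor scores, weighted per-query by each
    predictor's past reliability on the k most similar cases.

    predictors: {name: callable(query) -> score in [0, 1]}.
    case_base:  list of (features, {predictor_name: was_correct}) records.
    similarity: callable(query, features) -> higher means more similar.
    """
    # Retrieve the k cases most similar to the query.
    nearest = sorted(case_base, key=lambda c: -similarity(query, c[0]))[:k]
    combined, total = 0.0, 0.0
    for name, predict in predictors.items():
        # Reliability = fraction of the nearest cases this predictor got right.
        right = sum(rec[1][name] for rec in nearest)
        weight = right / max(len(nearest), 1)
        combined += weight * predict(query)
        total += weight
    return combined / total if total else 0.0
```

The per-query weighting is the point: a predictor that dominates globally may still be untrustworthy in one region of the problem space, and case retrieval lets the combination shift toward whichever component has been right on cases like the current one.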
Learning to Avoid Collisions
Sklar, Elizabeth (Brooklyn College, City University of New York) | Parsons, Simon (Brooklyn College, City University of New York) | Epstein, Susan L. (Hunter College, City University of New York) | Ozgelen, Arif Tuna (The Graduate Center, City University of New York) | Munoz, Juan Pablo (The Graduate Center, City University of New York) | Abbasi, Farah (College of Staten Island, City University of New York) | Schneider, Eric (Hunter College, City University of New York) | Costantino, Michael (College of Staten Island, City University of New York)
Members of a multi-robot team, operating within close quarters, need to avoid crashing into one another. Simple collision avoidance methods can prevent such collisions, typically by computing the distance to other robots and stopping, or perhaps moving away, when this distance falls below a certain threshold. While this approach may avoid disaster, it may also reduce the team's efficiency if robots halt for a long time to let others pass, or travel further to move around one another. This paper reports on experiments in which a human operator, through a graphical user interface, watches robots perform an exploration task. The operator can manually suspend robots' movements before they crash into each other, and then resume their movements when their paths are clear. Experiment logs record the robots' states when they are paused and resumed. A behavior pattern for collision avoidance is learned by classifying the states of the robots' environment when the human operator issues "wait" and "resume" commands. Preliminary results indicate that it is possible to learn a classifier that models these behavior patterns, and that different human operators consider different factors when they decide to stop and start robots.
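The learning step described above can be sketched very simply. The feature set here is a deliberate simplification — the experiments' logged state is richer — reducing each paused or resumed state to a hypothetical pair (distance to nearest teammate, closing speed) and imitating the operator's commands with a nearest-neighbor classifier.

```python
import math

def learn_classifier(log):
    """Learn to imitate an operator's "wait"/"resume" commands.

    log: list of ((distance, closing_speed), command) pairs taken from
    the experiment logs. Returns a 1-nearest-neighbor classifier.
    """
    def classify(state):
        # Label a new state with the command issued in the most
        # similar logged state.
        nearest = min(log, key=lambda ex: math.dist(ex[0], state))
        return nearest[1]
    return classify
```

Because each operator produces a separate log, training one classifier per operator would also expose the individual differences the abstract mentions: two operators' classifiers can disagree on the same state.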
The Role of Knowledge and Certainty in Understanding for Dialogue
Epstein, Susan L. (Hunter College and The Graduate Center of The City University of New York) | Passonneau, Rebecca (Columbia University) | Gordon, Joshua (Columbia University) | Ligorio, Tiziana (The Graduate School of The City University of New York)
As people engage in increasingly complex conversations with computers, the need for generality and flexibility in spoken dialogue systems becomes more apparent. This paper describes how three different spoken dialogue systems for the same task reason with knowledge and certainty as they seek to understand what people want. It advocates systems that exploit partial understanding, consider credibility, and are aware both of what they know and of their certainty that it matches their users’ intent.
A Framework in which Robots and Humans Help Each Other
Sklar, Elizabeth (Brooklyn College, City University of New York) | Epstein, Susan L. (Hunter College, City University of New York) | Parsons, Simon (Brooklyn College, City University of New York) | Ozgelen, Arif T. (The Graduate Center, City University of New York) | Munoz, Juan Pablo (Brooklyn College, City University of New York) | Gonzalez, Joel (City College, City University of New York)
Within the context of human/multi-robot teams, the "help me help you" paradigm offers different opportunities. A team of robots can help a human operator accomplish a goal, and a human operator can help a team of robots accomplish the same, or a different, goal. Two scenarios are examined here. First, a team of robots helps a human operator search a remote facility by recognizing objects of interest. Second, the human operator helps the robots improve their position (localization) information by providing quality control feedback.
Helping Agents Help Their Users Despite Imperfect Speech Recognition
Gordon, Joshua B. (Columbia University) | Passonneau, Rebecca J. (Columbia University) | Epstein, Susan L. (Hunter College and The Graduate Center of The City University of New York)
Spoken language is an important and natural way for people to communicate with computers. Nonetheless, habitable, reliable, and efficient human-machine dialogue remains difficult to achieve. This paper describes a multi-threaded semi-synchronous architecture for spoken dialogue systems. The focus here is on its utterance interpretation module. Unlike most architectures for spoken dialogue systems, this new one is designed to be robust to noisy speech recognition through earlier reliance on context, a mixture of rationales for interpretation, and fine-grained use of confidence measures. We report here on a pilot study that demonstrates its robust understanding of users’ objectives, and we compare it with our earlier spoken dialogue system implemented in a traditional pipeline architecture. Substantial improvements appear at all tested levels of recognizer performance.
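One way to picture the interplay of confidence measures and context described above — as a rough sketch, not the architecture's actual interpretation module — is to score each candidate interpretation by blending the recognizer's confidence with a contextual plausibility prior, so that context can rescue a low-confidence but plausible hypothesis. The function name and the blending weight `alpha` are assumptions.

```python
def rank_interpretations(candidates, context_prior, alpha=0.6):
    """Rank candidate interpretations of a noisy utterance.

    candidates:    list of (interpretation, asr_confidence) pairs.
    context_prior: callable(interpretation) -> plausibility in [0, 1],
                   derived from the current dialogue state.
    alpha:         assumed weight on recognition vs. context.
    """
    scored = [
        (alpha * conf + (1 - alpha) * context_prior(interp), interp)
        for interp, conf in candidates
    ]
    scored.sort(reverse=True)  # best blended score first
    return [interp for _, interp in scored]
```

Under this kind of scoring, an interpretation the recognizer slightly prefers can still lose to one that the dialogue context makes far more plausible — the behavior a pipeline that trusts recognition scores alone cannot produce.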
Toward Spoken Dialogue as Mutual Agreement
Epstein, Susan L. (Hunter College and The Graduate Center of The City University of New York) | Gordon, Joshua (Columbia University) | Passonneau, Rebecca (Columbia University) | Ligorio, Tiziana (The Graduate Center of The City University of New York)
A spoken dialogue system (SDS) has a social role: it supposedly allows people to communicate with a computer in ordinary language. A robust SDS should support coherent and habitable dialogue, even when it confronts situations for which it has no explicit pre-specified behavior. To ensure robust task completion, however, SDS designers typically produce systems that make a sequence of rigid demands on the user, and thereby lose any semblance of natural dialogue. The social and collaborative nature of dialogue challenges an SDS in many ways. The spontaneity of dialogue gives rise to disfluencies, where a person repeats or interrupts herself, produces filled pauses or false starts and self-repairs. Disfluencies play a fundamental role in dialogue, as signals for turn-taking (Gravano, 2009; Sacks, Schegloff and Jefferson, 1974) and for grounding to establish shared beliefs about the current state of mutual understanding (Clark and Schaefer, 1989). Most SDSs handle the content of the user's utterances, but do not fully integrate the way they address utterance meaning, disfluencies, turn-taking and the collaborative nature of grounding. The thesis of our work is that a dialogue should evolve as a set of agreements that arise from joint …
From Unsolvable to Solvable: An Exploration of Simple Changes
Epstein, Susan L. (The City University of New York) | Yun, Xi (The City University of New York)
This paper investigates how readily an unsolvable constraint satisfaction problem can be reformulated so that it becomes solvable. We investigate small changes in the definitions of the problem's constraints, changes that alter neither the structure of its constraint graph nor the tightness of its constraints. Our results show that structured and unstructured problems respond differently to such changes, as do easy and difficult problems taken from the same problem class. Several plausible explanations for this behavior are discussed.
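The kind of change studied can be made concrete with a small sketch (an illustration of the invariants, not the paper's procedure). A binary constraint is represented extensionally as the set of value pairs it allows over its two variables; trading one allowed pair for a forbidden one redefines the constraint while leaving both the constraint graph (same scope) and the tightness (same number of allowed pairs) unchanged.

```python
from itertools import product

def tightness(allowed, domain):
    """Fraction of value pairs the binary constraint forbids."""
    return 1 - len(allowed) / len(domain) ** 2

def swap_pair(allowed, domain):
    """Return a new allowed-set with one allowed pair traded for a
    forbidden one: a changed definition, identical scope and tightness."""
    forbidden = [p for p in product(domain, repeat=2) if p not in allowed]
    if not forbidden or not allowed:
        return set(allowed)  # nothing to trade
    new = set(allowed)
    new.remove(next(iter(new)))  # drop one allowed pair
    new.add(forbidden[0])        # admit one previously forbidden pair
    return new
```

Repeated swaps of this kind explore a space of problems that are indistinguishable by the usual structural parameters, which is exactly why it is notable that solvability can change under them.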