Goto

Collaborating Authors

 Ward, Nigel


Spoken Language Interaction with Robots: Research Issues and Recommendations, Report from the NSF Future Directions Workshop

arXiv.org Artificial Intelligence

With robotics rapidly advancing, more effective human-robot interaction is increasingly needed to realize the full potential of robots for society. While spoken language must be part of the solution, our ability to provide spoken language interaction capabilities is still very limited. The National Science Foundation accordingly convened a workshop, bringing together speech, language, and robotics researchers to discuss what needs to be done. The result is this report, in which we identify key scientific and engineering advances needed. Our recommendations broadly relate to eight general themes. First, meeting human needs requires addressing new challenges in speech technology and user experience design. Second, this requires better models of the social and interactive aspects of language use. Third, for robustness, robots need higher-bandwidth communication with users and better handling of uncertainty, including simultaneous consideration of multiple hypotheses and goals. Fourth, more powerful adaptation methods are needed, to enable robots to communicate in new environments, for new tasks, and with diverse user populations, without extensive re-engineering or the collection of massive training data. Fifth, since robots are embodied, speech should function together with other communication modalities, such as gaze, gesture, posture, and motion. Sixth, since robots operate in complex environments, speech components need access to rich yet efficient representations of what the robot knows about objects, locations, noise sources, the user, and other humans. Seventh, since robots operate in real time, their speech and language processing components must also. Eighth, in addition to more research, we need more work on infrastructure and resources, including shareable software modules and internal interfaces, inexpensive hardware, baseline systems, and diverse corpora.
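The report itself contains no code; as a purely illustrative sketch of the third and sixth themes above, the following hypothetical Python fragment keeps several recognition hypotheses alive and re-scores them against a simple world model instead of committing to a single interpretation. The class names, scores, and world-model contents are invented for illustration and are not taken from the report.

# Illustrative only: a hypothetical way a robot might weigh multiple
# speech-recognition hypotheses against what it knows about its surroundings.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    asr_score: float  # confidence from the speech recognizer

# A toy world model: objects the robot currently believes are present.
world_model = {"visible_objects": {"red cup", "blue box"}}

def plausibility(hyp: Hypothesis) -> float:
    """Boost hypotheses that mention objects the robot can actually see."""
    mentions_known = any(obj in hyp.text for obj in world_model["visible_objects"])
    return 1.2 if mentions_known else 0.8

def rank(hypotheses):
    """Keep all hypotheses, ordered by combined recognizer and world-model score."""
    return sorted(hypotheses, key=lambda h: h.asr_score * plausibility(h), reverse=True)

n_best = [
    Hypothesis("pick up the red cup", 0.55),
    Hypothesis("pick up the red cop", 0.60),
]
for h in rank(n_best):
    print(h.text, round(h.asr_score * plausibility(h), 2))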



A Flexible, Parallel Generator of Natural Language

AI Magazine

My Ph.D. thesis (Ward 1992, 1991) addressed the task of generating natural language utterances. It was motivated by two difficulties in scaling up existing generators. Current generators accept only input that is relatively poor in information, such as feature structures or lists of propositions; they are unable to deal with input rich in information, as one might expect from, for example, an expert system with a complete model of its domain or a natural language understander with good inference ability. Current generators also have a very restricted knowledge of language -- indeed, they succeed largely because they have few syntactic or lexical options available (McDonald 1987) -- and they are unable to cope with more knowledge because they deal with interactions among the various possible choices only as special cases. To address these and other issues, I built a system called FIG (flexible incremental generator). FIG is based on a single associative network that encodes lexical knowledge, syntactic knowledge, and world knowledge. Computation is done by spreading activation across the network, supplemented with a small amount of symbolic processing. Thus, FIG is a spreading activation or structured connectionist system (Feldman et al. 1988).
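The spreading-activation computation can be made concrete with a small sketch. This is not FIG's implementation; the network, node names, weights, and update rule below are hypothetical, shown only to illustrate activation flowing from concept nodes through lexical and syntactic nodes until a word is selected.

# Minimal illustrative sketch of spreading activation over an associative
# network (hypothetical nodes and weights; not FIG's actual network).

# The network links world-knowledge nodes to lexical and syntactic nodes.
edges = {
    "concept:dog": [("word:dog", 0.9), ("word:hound", 0.4)],
    "concept:run": [("word:run", 0.8), ("syntax:verb-phrase", 0.5)],
    "word:dog":    [("syntax:noun-phrase", 0.6)],
    "word:run":    [("syntax:verb-phrase", 0.6)],
}

DECAY = 0.8  # fraction of activation passed along each link per cycle

def spread(activation, cycles=3):
    """Propagate activation through the network for a few cycles."""
    for _ in range(cycles):
        new = dict(activation)
        for node, level in activation.items():
            for neighbor, weight in edges.get(node, []):
                new[neighbor] = new.get(neighbor, 0.0) + level * weight * DECAY
        activation = new
    return activation

# Input rich in information: several concepts are active at once,
# and the most highly activated word is emitted next.
state = spread({"concept:dog": 1.0, "concept:run": 1.0})
words = {n: a for n, a in state.items() if n.startswith("word:")}
print(max(words, key=words.get))  # word with the highest activation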



Review of Machine Translation: Past, Present, Future

AI Magazine

Hutchins not only presents the theories, algorithms, and designs but also the history, goals, assumptions, and constraints of each project, along with the recurring dichotomies of machine translation research: practical versus theoretical, empirical versus perfectionist, and direct versus indirect, and touches on the AI philosophy of understanding and meaning (p. 327).