

AAAI News

AI Magazine

July Conference Highlights

Again this year, AAAI is staging the National Conference on Artificial Intelligence, with 19 deployed applications selected for presentation from entries around the world. A series of invited speakers and panels will complement the refereed papers and introduce areas of AI research that have unusual interest and application. The National Conference is the year's largest meeting ground for those interested in AI, from the scientific, academic, and business communities. This year's program is particularly diverse, with a concentration on research results that bridge the gaps between the different AI technologies. AAAI-92 offers a series of technical highlights, including 34 focused technical sessions. An AI Art Exhibition will showcase the use of AI in serious works of art. AI on-Line, five audience-interactive user panels, will offer practical learning on key business and organization issues based on case studies, adding a new dimension to the conference and promising give-and-take discussions about AI in operation. Invited participants include representatives of the University of Southern California/Information Sciences Institute and Peter Szolovits, Associate Professor of Computer Science at the Massachusetts Institute of Technology.

AAAI To Include New AI Robotics Competition

AAAI will hold its first AI Robotics Competition at the AAAI-92 National Conference in San Jose, California, July 12-16, 1992. Entrants include a robot weighing in at 300 lbs. from SRI; Bert and Ernie, midget-sized, untethered, and battery-powered robots from NASA-JSC; William, a feel-its-way robot from MIT and JPL; Flash, a Denning mobile platform from MITRE; and Zorton, a walking machine designed to compete in the robotic decathlon, from Ecole Polytechnique of Montreal. The AAAI Robot Rules capture the spirit of the competition, indicating, "It will not be slick, polished...there will be a certain amount of chaos." There is a serious purpose, Dean noted, "to bring together areas of AI including those working in perception."


Intelligent Multimedia Interfaces

AI Magazine

On Monday, 15 July 1991, prior to the Ninth National Conference on Artificial Intelligence (AAAI-91) in Anaheim, California, over 50 scientists and engineers attended the AAAI-91 Workshop on Intelligent Multimedia Interfaces. The purpose of the workshop was threefold: (1) bring together researchers and practitioners to report on current advances in intelligent multimedia interface systems and their underlying theories, (2) foster scientific interchange among these individuals, and (3) evaluate current efforts and make recommendations for future investigations.




Adjoint-Functions and Temporal Learning Algorithms in Neural Networks

Neural Information Processing Systems

The development of learning algorithms is generally based upon the minimization of an energy function. It is a fundamental requirement to compute the gradient of this energy function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural gains, etc. In principle, this requires solving a system of nonlinear equations for each parameter of the model, which is computationally very expensive. A new methodology for neural learning of time-dependent nonlinear mappings is presented. It exploits the concept of adjoint operators to enable a fast global computation of the network's response to perturbations in all the system's parameters. The importance of the time boundary conditions of the adjoint functions is discussed. An algorithm is presented in which the adjoint sensitivity equations are solved simultaneously (i.e., forward in time) along with the nonlinear dynamics of the neural networks. This methodology makes real-time applications and hardware implementations of temporal learning feasible.
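The adjoint idea in the abstract can be illustrated in discrete time: one backward (adjoint) pass yields the gradient of the energy with respect to every weight at once, instead of one nonlinear solve per parameter. The sketch below is a minimal illustration under assumed discrete-time tanh dynamics (the paper itself treats continuous-time networks); all names and sizes are illustrative, and the adjoint gradient is checked against a finite difference.

```python
import math
import random

# Assumed discrete-time dynamics: x_{t+1} = tanh(W x_t).
# Energy: E = 0.5 * ||x_T - y||^2.  The adjoint (costate) recursion
# propagates sensitivities so that all dE/dW_ij come from a single
# extra pass, rather than one solve per weight.

N, T = 3, 5
random.seed(0)
W = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
x0 = [0.1, -0.2, 0.3]
y = [0.5, -0.5, 0.0]

def forward(W):
    xs = [x0]
    for _ in range(T):
        x = xs[-1]
        pre = [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]
        xs.append([math.tanh(p) for p in pre])
    return xs

def energy(W):
    xT = forward(W)[-1]
    return 0.5 * sum((xT[i] - y[i]) ** 2 for i in range(N))

def adjoint_grad(W):
    xs = forward(W)
    # time boundary condition: adjoint at the final time is dE/dx_T
    lam = [xs[-1][i] - y[i] for i in range(N)]
    grad = [[0.0] * N for _ in range(N)]
    for t in range(T - 1, -1, -1):
        x, xn = xs[t], xs[t + 1]
        # tanh' = 1 - tanh^2, evaluated at the next state
        delta = [lam[i] * (1 - xn[i] ** 2) for i in range(N)]
        for i in range(N):
            for j in range(N):
                grad[i][j] += delta[i] * x[j]
        # propagate the adjoint one step back through W^T
        lam = [sum(W[i][j] * delta[i] for i in range(N)) for j in range(N)]
    return grad

# check one component against a central finite difference
g = adjoint_grad(W)
eps = 1e-6
W[0][1] += eps
e_plus = energy(W)
W[0][1] -= 2 * eps
e_minus = energy(W)
W[0][1] += eps
fd = (e_plus - e_minus) / (2 * eps)
assert abs(g[0][1] - fd) < 1e-5
```

The gradient agrees with the finite difference, but the adjoint pass costs only one traversal of the trajectory regardless of how many parameters the network has.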



On the Circuit Complexity of Neural Networks

Neural Information Processing Systems

Viewing n-variable boolean functions as vectors in R^(2^n), we invoke tools from linear algebra and linear programming to derive new results on the realizability of boolean functions using threshold gates. Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal input.
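To make the vector view concrete: each boolean function of n variables is a point in a 2^n-dimensional space, and a single threshold gate realizes it only if its truth table is linearly separable. The sketch below is a brute-force stand-in for the paper's linear-programming machinery: it enumerates all 16 two-variable boolean functions and counts how many a single threshold gate can realize. The weight and threshold ranges are assumptions that happen to suffice for n = 2.

```python
from itertools import product

# A threshold gate computes sign(w1*x1 + w2*x2 - theta).  For n = 2,
# small integer weights and half-integer thresholds cover every
# linearly separable truth table (an assumption checked by the search).

def threshold_realizable(truth):
    inputs = list(product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)
    for w1 in range(-3, 4):
        for w2 in range(-3, 4):
            for theta2 in range(-7, 8):
                theta = theta2 / 2  # half-integer steps avoid ties
                if all((w1 * a + w2 * b > theta) == bool(out)
                       for (a, b), out in zip(inputs, truth)):
                    return True
    return False

# enumerate all 2^(2^2) = 16 boolean functions of two variables
realizable = [t for t in product([0, 1], repeat=4) if threshold_realizable(t)]
print(len(realizable))  # 14: every function except XOR and XNOR
```

The two functions that fail are exactly XOR and its complement, the classic examples of truth tables that no single hyperplane can separate.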


Connectionist Approaches to the Use of Markov Models for Speech Recognition

Neural Information Processing Systems

Previous work has shown the ability of Multilayer Perceptrons (MLPs) to estimate emission probabilities for Hidden Markov Models (HMMs). The advantages of a speech recognition system incorporating both MLPs and HMMs are better discrimination and the ability to incorporate multiple sources of evidence (features, temporal context) without restrictive assumptions of distributions or statistical independence. This paper presents results on the speaker-dependent portion of DARPA's English-language Resource Management database. Results support the previously reported utility of MLP probability estimation for continuous speech recognition. An additional approach we are pursuing is to use MLPs as nonlinear predictors for autoregressive HMMs. While this is shown to be more compatible with the HMM formalism, it still suffers from several limitations. This approach is generalized to take account of time correlation between successive observations, without any restrictive assumptions about the driving noise.

1 INTRODUCTION

We have been working on continuous speech recognition using moderately large vocabularies (1000 words) [1,2].
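The hybrid MLP/HMM idea can be sketched as follows: the MLP's posteriors P(state | frame) are divided by the state priors to give scaled emission likelihoods, which then drive ordinary Viterbi decoding. Everything below is illustrative, with hard-coded numbers standing in for MLP outputs and a two-state HMM in place of a real phone model.

```python
import math

# Hybrid decoding sketch: by Bayes' rule, P(x|s) is proportional to
# P(s|x) / P(s), so MLP posteriors divided by state priors can replace
# the usual Gaussian emission densities inside Viterbi.

states = [0, 1]
priors = [0.5, 0.5]                         # P(state), from training counts
trans = [[0.8, 0.2], [0.2, 0.8]]            # P(s_t | s_{t-1})
init = [0.5, 0.5]
# stand-in "MLP" posteriors P(state | frame) for a 4-frame utterance
posteriors = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.1, 0.9]]

def viterbi(posteriors):
    # scaled log-likelihoods: log P(x|s) = log P(s|x) - log P(s) + const
    loglik = [[math.log(p[s]) - math.log(priors[s]) for s in states]
              for p in posteriors]
    delta = [math.log(init[s]) + loglik[0][s] for s in states]
    back = []
    for t in range(1, len(posteriors)):
        back.append([])
        new = []
        for s in states:
            best = max(states, key=lambda r: delta[r] + math.log(trans[r][s]))
            back[-1].append(best)
            new.append(delta[best] + math.log(trans[best][s]) + loglik[t][s])
        delta = new
    # backtrack from the best final state
    path = [max(states, key=lambda s: delta[s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    path.reverse()
    return path

print(viterbi(posteriors))  # prints [0, 0, 1, 1]
```

The constant P(x) cancels when comparing paths, which is why the scaled likelihood is enough for decoding even though it is not a true density.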



Where's the AI?

AI Magazine

I survey four viewpoints about what AI is. I describe a program exhibiting AI as one that can change as a result of interactions with the user. Such a program would have to process hundreds or thousands of examples as opposed to a handful. Because AI is a machine's attempt to explain the behavior of the (human) system it is trying to model, the ability of a program design to scale up is critical. Researchers need to face the complexities of scaling up to programs that actually serve a purpose. The move from toy domains into concrete ones has three big consequences for the development of AI. First, it will force software designers to face the idiosyncrasies of their users. Second, it will act as an important reality check between the language of the machine, the software, and the user. Third, the scaled-up programs will become templates for future work. By AI, people usually mean one of the following four things: (1) AI means magic bullets, (2) AI means inference engines, (3) AI means getting a machine to do something you didn't think a machine could do (the "gee whiz" view), and (4) AI means having a machine learn. For a variety of reasons, some of which I discuss in this article, the newly formed Institute for the Learning Sciences has been concentrating its efforts on building high-quality educational software for use in business and in elementary and secondary schools.