The Sixth International Workshop on Nonmonotonic Reasoning

AI Magazine

The Sixth International Workshop on Nonmonotonic Reasoning, sponsored by the American Association for Artificial Intelligence (AAAI), was held 10 to 12 June 1996 in Timberline, Oregon. Participants came from countries including the Netherlands, the United States, and Venezuela, and the papers described new work in nonmonotonic reasoning. Finally, we would like to acknowledge the support of AAAI for student travel funds.


Artificial Intelligence: What Works and What Doesn't?

AI Magazine

AI has been well supported by government research and development dollars for decades now, and people are beginning to ask hard questions: What really works? What are the limits? What doesn't work as advertised? What isn't likely to work? What isn't affordable? This article holds a mirror up to the community, both to provide feedback and to stimulate more self-assessment. The significant accomplishments and strengths of the field are highlighted. The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field's ability to produce reusable and interoperable components.


Gaps and Bridges: New Directions in Planning and Natural Language Generation

AI Magazine

The workshop entitled "Gaps and Bridges: New Directions in Planning and Natural Language Generation" was held on 12 August 1996 in Budapest, Hungary. This article describes the four sessions of the workshop and summarizes the important themes that were revealed.


Adaptive Back-Propagation in On-Line Learning of Multilayer Networks

Neural Information Processing Systems

This research has been motivated by the dominance of the suboptimal symmetric phase in online learning of two-layer feedforward networks trained by gradient descent [2]. This trapping is emphasized for inappropriately small learning rates but exists in all training scenarios, affecting the learning process considerably. We proposed an adaptive back-propagation training algorithm.
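
As a concrete illustration of the setting (not the published algorithm), the sketch below runs plain online gradient descent on a two-layer soft committee machine and rescales the back-propagated hidden-unit errors by a hypothetical gain beta > 1; the idea is that amplifying the hidden-unit error signal can help the student weights break out of the symmetric phase sooner. The tanh activation, the gain beta, and all constants are illustrative assumptions.

import numpy as np

# Minimal sketch, not the paper's exact algorithm: online SGD for a student
# soft committee machine y(x) = sum_i g(w_i . x) tracking a teacher of the
# same architecture. `beta` is a hypothetical adaptive gain on the
# back-propagated error, meant to speed escape from the symmetric phase.

def g(a):
    return np.tanh(a)             # hidden-unit activation (illustrative)

def dg(a):
    return 1.0 - np.tanh(a) ** 2  # derivative of the activation

rng = np.random.default_rng(0)
N, K = 100, 3                                   # input dim, hidden units
W_t = rng.standard_normal((K, N)) / np.sqrt(N)  # teacher weights (fixed)
W = rng.standard_normal((K, N)) * 1e-2          # student, nearly symmetric

eta, beta = 0.5 / N, 1.5                        # learning rate, adaptive gain

for step in range(100_000):             # online: each example is seen once
    x = rng.standard_normal(N)
    h_s, h_t = W @ x, W_t @ x
    err = g(h_t).sum() - g(h_s).sum()   # scalar output error
    delta = beta * err * dg(h_s)        # rescaled back-propagated deltas
    W += eta * np.outer(delta, x)       # gradient step on this example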


Hybrid Connectionist-Symbolic Modules: A Report from the IJCAI-95 Workshop on Connectionist-Symbolic Integration

AI Magazine

The Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches was held on 19 to 20 August 1995 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence. The focus of the workshop was on learning and architectures that feature hybrid representations and support hybrid learning. The general consensus was that hybrid connectionist-symbolic models constitute a promising avenue to the development of more robust, more powerful, and more versatile architectures for both cognitive modeling and intelligent systems.


IJCAI-95 Workshop on Adaptation and Learning in Multiagent Systems

AI Magazine

The goal of the Workshop on Adaptation and Learning in Multiagent Systems was to focus on research that addresses unique requirements for agents learning and adapting to work in the presence of other agents. Recognizing the applicability and limitations of current machine-learning research as applied to multiagent problems and developing new learning and adaptation mechanisms particularly targeted to this class of problems were the primary research issues that we wanted the authors to address. This article outlines the presentations that were made at the workshop and the success of the workshop in meeting the established goals. Issues that need to be better understood are also presented.


Thirteenth International Distributed AI Workshop

AI Magazine

Distributed artificial intelligence (DAI) is the cooperative solution of problems in multiagent intelligent systems with both computational and human agents. The central problem in DAI is how to achieve coordinated behavior among such agents. The goal of this workshop, which was held in June 1995 in San Francisco, was "making connections," trying to better understand the connections between DAI and related fields (for example, computer-supported cooperative work and group decision support). The DAI Workshop received financial support from the American Association for Artificial Intelligence as well as the Boeing Company.


On-line Learning of Dichotomies

Neural Information Processing Systems

The performance of online algorithms for learning dichotomies is studied. In online learning, the number of examples P is equivalent to the learning time, since each example is presented only once. The learning curve, or generalization error as a function of P, depends on the schedule at which the learning rate is lowered. For a target that is a perceptron rule, the learning curve of the perceptron algorithm can decrease as fast as P^{-1}, if the schedule is optimized. If the target is not realizable by a perceptron, the perceptron algorithm does not generally converge to the solution with lowest generalization error.
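
To make the schedule dependence concrete, here is a minimal sketch under the assumption of a realizable perceptron target: the classic perceptron rule with a learning rate annealed in proportion to 1/P, the regime in which the generalization error can fall off roughly as P^{-1} (a constant rate instead plateaus). All constants are illustrative, not the optimized schedule from the analysis.

import numpy as np

# Minimal sketch: online perceptron learning of a realizable dichotomy
# with learning rate annealed as eta = c / P. The generalization error of
# a perceptron is the angle between student and teacher divided by pi.

rng = np.random.default_rng(1)
N = 50
B = rng.standard_normal(N)
B /= np.linalg.norm(B)                     # teacher direction (unit norm)
J = rng.standard_normal(N) * 0.1           # student weights

def gen_error(J, B):
    c = J @ B / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

for P in range(1, 100_001):                # each example is presented once
    x = rng.standard_normal(N)
    if np.sign(J @ x) != np.sign(B @ x):   # update only on mistakes
        eta = 10.0 / P                     # annealed schedule, c = 10
        J += eta * np.sign(B @ x) * x
    if P % 25_000 == 0:
        print(P, gen_error(J, B))          # error shrinks as P grows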