Letters to the Editor

AI Magazine

Dr. Northrup Fowler III
Rome Laboratory

Recently I circulated the Waltz taxonomy. I wonder whether AAAI might not consider a broader review of systems architectures, and thereby gain some sense of current relative interest in, and of the state of, the discipline as a whole. Such a review would be a valuable service to those AAAI serves, and I'm surprised in a way that AAAI hasn't already undertaken this effort, as other professional organizations do.

The MVL theorem-proving system is available by anonymous ftp from Stanford (t.stanford.edu): log in with "anonymous" as your user name, followed by any password you wish. Its features include loop detection and recursion control in the underlying theorem prover and a fast unifier that includes sequence variables.
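For readers unfamiliar with sequence variables, the sketch below illustrates the general idea in Python under my own assumptions: it is not the MVL system's unifier, and the term representation and names (is_seqvar, unify_lists, and so on) are invented for illustration. A sequence variable may bind to zero or more terms of an argument list, which an ordinary variable cannot do.

```python
# Illustrative sketch only: a naive unifier in which a sequence variable,
# written ('*', name), may bind to zero or more terms of an argument list.
# Term representation and names are invented; this is not the MVL code.

def is_var(t):       # ordinary variable, written ('?', name)
    return isinstance(t, tuple) and len(t) == 2 and t[0] == '?'

def is_seqvar(t):    # sequence variable, written ('*', name)
    return isinstance(t, tuple) and len(t) == 2 and t[0] == '*'

def walk(t, subst):
    """Chase ordinary-variable bindings until an unbound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst):
    """Unify two terms; return an extended substitution dict, or None."""
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}          # no occurs check in this sketch
    if is_var(y):
        return {**subst, y: x}
    # Compound terms are written (functor, [arg, ...]).
    if (isinstance(x, tuple) and isinstance(y, tuple) and
            len(x) == 2 and len(y) == 2 and
            isinstance(x[1], list) and isinstance(y[1], list)):
        if x[0] != y[0]:
            return None
        return unify_lists(x[1], y[1], subst)
    return None

def unify_lists(xs, ys, subst):
    """Unify two argument lists; a leading sequence variable may absorb
    zero or more terms (first successful split wins)."""
    if not xs and not ys:
        return subst
    if xs and is_seqvar(xs[0]):
        for k in range(len(ys) + 1):            # try every prefix of ys
            result = unify_lists(xs[1:], ys[k:], {**subst, xs[0]: ys[:k]})
            if result is not None:
                return result
        return None
    if ys and is_seqvar(ys[0]):
        return unify_lists(ys, xs, subst)
    if xs and ys:
        s = unify(xs[0], ys[0], subst)
        return unify_lists(xs[1:], ys[1:], s) if s is not None else None
    return None                                  # lists of unequal length

# f(a, *R) unifies with f(a, b, c), binding the sequence variable *R to [b, c].
print(unify(('f', ['a', ('*', 'R')]), ('f', ['a', 'b', 'c']), {}))
```

A production unifier would add an occurs check, term indexing, and proper backtracking for speed; the point here is only how a sequence variable differs from an ordinary one.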


Learning in Higher-Order "Artificial Dendritic Trees"

Neural Information Processing Systems

The computational territory between the linearly summing McCulloch-Pitts neuron and the nonlinear differential equations of Hodgkin & Huxley is relatively sparsely populated. Connectionists use variants of the former, and computational neuroscientists struggle with the exploding parameter spaces provided by the latter. However, evidence from biophysical simulations suggests that the voltage transfer properties of synapses, spines, and dendritic membranes involve many detailed nonlinear interactions, not just a squashing function at the cell body. Real neurons may indeed be higher-order nets. For the computationally minded, higher-order interactions mean, first of all, quadratic terms. This contribution presents a simple learning principle for a binary tree with a logistic/quadratic transfer function at each node. These functions, though highly nested, are shown to be capable of changing their shape in concert. The resulting tree structure receives inputs at its leaves and, at the top, outputs an estimate of the probability that the input pattern belongs to one of two classes.
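As a rough illustration of the kind of architecture the abstract describes, here is a minimal sketch assuming a balanced binary tree whose internal nodes each apply a logistic function to a quadratic form of their two children's outputs, with the root output read as a class probability. The particular six-weight quadratic, the random initialization, and all names are my own illustrative choices, not details taken from the paper, and the paper's learning principle is deliberately not shown.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's code): a fixed binary tree
# whose leaves take the input components and whose internal nodes each apply
# a logistic of a quadratic function of their two children. The root's
# output is read as P(class 1 | input).

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

class Node:
    def __init__(self, left, right, rng):
        self.left, self.right = left, right
        # One quadratic/logistic unit: w0 + w1*a + w2*b + w3*a*b + w4*a^2 + w5*b^2
        self.w = rng.normal(scale=0.1, size=6)

    def forward(self, x):
        a = self.left.forward(x)
        b = self.right.forward(x)
        w = self.w
        return logistic(w[0] + w[1]*a + w[2]*b + w[3]*a*b + w[4]*a*a + w[5]*b*b)

class Leaf:
    def __init__(self, index):
        self.index = index

    def forward(self, x):
        return x[self.index]

def build_tree(indices, rng):
    """Build a balanced binary tree over the given input indices."""
    if len(indices) == 1:
        return Leaf(indices[0])
    mid = len(indices) // 2
    return Node(build_tree(indices[:mid], rng),
                build_tree(indices[mid:], rng), rng)

rng = np.random.default_rng(0)
tree = build_tree(list(range(8)), rng)   # 8 inputs at the leaves
p = tree.forward(rng.random(8))          # estimated P(class 1 | x)
print(f"P(class 1) = {p:.3f}")
```

Because each node's output feeds a quadratic at its parent, the composed function is highly nested, which is the regime the abstract's learning principle is meant to handle.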


Learning in Higher-Order "Artificial Dendritic Trees

Neural Information Processing Systems

The computational territory between the linearly summing McCulloch-Pitts neuron and the nonlinear differential equations of Hodgkin & Huxley is relatively sparsely populated. Connectionists use variants of the former and computational neuroscientists struggle with the exploding parameter spaces provided by the latter. However, evidence from biophysical simulations suggests that the voltage transfer properties of synapses, spines and dendritic membranes involve many detailed nonlinear interactions, not just a squashing function at the cell body. Real neurons may indeed be higher-order nets. For the computationally-minded, higher order interactions means, first of all, quadratic terms. This contribution presents a simple learning principle for a binary tree with a logistic/quadratic transfer function at each node. These functions, though highly nested, are shown to be capable of changing their shape in concert. The resulting tree structure receives inputs at its leaves, and outputs an estimate of the probability that the input pattern is a member of one of two classes at the top.


Guest Editorial: Design for AI Researchers

AI Magazine

Design has long been an area of particular interest for AI researchers. Herbert Simon's 1968 Karl Taylor Compton lectures on the sciences of the artificial included substantial material on design. However, only recently have design researchers embraced paradigms from AI and AI researchers chosen design as a domain to study.


Letters to the Editor

AI Magazine

I appreciated very much the Spring 1990 issue of AI Magazine on Robotic Assembly and Task Planning. It seems to me, however, that some good work carried out on this subject in Europe during recent years has not received much coverage. Also, comments on the low participation of women in the computer industry, suggestions for the inclusion of dissertation abstracts, comments on the Feldman article in the Fall 1990 issue, and a note about the discontinuance of plastic coverings on AI Magazine.


In Memoriam: Arthur Samuel: Pioneer in Machine Learning

AI Magazine

Arthur Samuel (1901-1990) was a pioneer of artificial intelligence research. From 1949 through the late 1960s, he did the best work in making computers learn from their experience. His vehicle for this work was the game of checkers.


In Memoriam: Arthur Samuel: Pioneer in Machine Learning

AI Magazine

From 1949 through the late 1960s, he did the best work in making computers learn from their experience. In 1949, Samuel joined IBM's Poughkeepsie Laboratory, where he worked on IBM's first stored-program computer. Programs for playing games often fill the role in artificial intelligence research that the fruit fly Drosophila plays in genetics. Samuel's papers on machine learning are still worth studying.


Letters to the Editor

AI Magazine

Akman's letter refers to his difficulties in describing various limitations of QSIM. At the risk of being scolded again for "employing universal truths and unarguable facts" in support of my position, I would point out that examination of the limitations of one's own work is an invaluable guide to further research. Akman observes, correctly, that QSIM is a purely mathematical formalism for expressing qualitative differential equation models of the world, and not a physical modeling language. Thus far, I believe, approximately 120 copies have been distributed. The QSIM program is a research tool, not a product, so any commercial rights are retained, and I cannot warrant that it is free of bugs.

University of Texas at Austin
Austin, Texas 78712

Thank you for publishing our reply to Prof. Kuipers in the last issue.

References

Crawford, J. M.; Farquhar, A.; and Kuipers, B. 1990. QPC: A Compiler from Physical Models into Qualitative Differential Equations. In Proceedings of the Eighth National Conference on Artificial Intelligence.
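For readers who have not seen QSIM, the sketch below is a toy illustration of what a qualitative differential equation model amounts to: variables with ordered landmark values plus constraints such as monotonic-function, addition, and derivative relations. The Python data layout, the names, and the particular bathtub example are my own illustrative choices and do not reflect QSIM's actual input language or API.

```python
# Toy illustration only -- not QSIM's input language or API.
# A qualitative differential equation (QDE) model is, roughly, a set of
# variables with qualitative value spaces plus constraints relating them.
# Below is the familiar filling-bathtub example written as plain Python data.

bathtub_qde = {
    "variables": {
        # name: ordered landmark values (the variable's "quantity space")
        "inflow":  ["0", "if*", "inf"],
        "amount":  ["0", "full", "inf"],
        "outflow": ["0", "of*", "inf"],
        "netflow": ["minf", "0", "inf"],
    },
    "constraints": [
        ("M+",    "amount", "outflow"),            # outflow rises monotonically with amount
        ("ADD",   "netflow", "outflow", "inflow"),  # netflow + outflow = inflow
        ("DERIV", "amount", "netflow"),             # d(amount)/dt = netflow
    ],
}

# A qualitative state assigns each variable a (magnitude, direction) pair,
# e.g. the tub part-full and still filling:
state = {
    "inflow":  ("if*", "std"),           # constant inflow
    "amount":  (("0", "full"), "inc"),   # between empty and full, rising
    "outflow": (("0", "of*"), "inc"),
    "netflow": (("0", "inf"), "dec"),    # still positive but shrinking
}

print(f"{len(bathtub_qde['constraints'])} constraints over "
      f"{len(bathtub_qde['variables'])} variables; amount is {state['amount']}")
```

Simulation would then enumerate the successor states consistent with these constraints; that machinery is what the QSIM program itself provides and is not reproduced here.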