FLAIRS 2000 Conference Report

AI Magazine

LBD is a curriculum consisting of prescribed exercises that teach children real-world skills by having them perform several activities that are familiar to them. The cochairs of the conference were Avelino Gonzalez, University of Central Florida, and Massood Towhidnejad, Embry-Riddle Aeronautical University. The program chairs were Bill Manaris and Jim Etheredge, both of the University of … One invited speaker talked about the computer's role in the current revolution in cognitive science. His talk came from a historical perspective: how humankind has always felt an overwhelming need to understand the world around us and to control it for our own benefit. The conference also had two panel discussions. The first focused on modern trends in funding opportunities for AI, moderated by Ingrid Russell of the University of Hartford. This group included an impressive list of panelists: …


Robust Neural Network Regression for Offline and Online Learning

Neural Information Processing Systems

Although one can derive the Gaussian noise assumption based on a maximum entropy approach, the main reason for this assumption is practicability: under the Gaussian noise assumption the maximum likelihood parameter estimate can simply be found by minimization of the squared error. Despite its common use it is far from clear that the Gaussian noise assumption is a good choice for many practical problems. A reasonable approach therefore would be a noise distribution which contains the Gaussian as a special case but which has a tunable parameter that allows for more flexible distributions.
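To make the abstract's central reduction concrete: with i.i.d. Gaussian noise of fixed variance, the negative log-likelihood of the residuals is, up to additive and multiplicative constants, exactly the squared error, so maximizing likelihood and minimizing squared error coincide. The sketch below contrasts this with a Student-t likelihood, one standard example of a family that contains the Gaussian as a limiting case (degrees of freedom nu -> infinity) and has a tunable tail parameter. The excerpt does not specify the paper's actual noise model, so the t-density, the toy data, and all names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t

# Toy linear-regression data with a few gross outliers (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, size=x.size)
y[::15] += 6.0  # inject heavy outliers

def gauss_nll(params):
    """Gaussian negative log-likelihood; up to constants this IS the
    squared error, so its minimizer is the least-squares fit."""
    a, b = params
    return -norm.logpdf(y - (a * x + b), scale=0.3).sum()

def t_nll(params, nu=3.0):
    """Student-t NLL: nu tunes the tail weight, and nu -> infinity
    recovers the Gaussian case."""
    a, b = params
    return -t.logpdf(y - (a * x + b), df=nu, scale=0.3).sum()

w_gauss = minimize(gauss_nll, x0=[0.0, 0.0]).x   # pulled toward outliers
w_robust = minimize(t_nll, x0=[0.0, 0.0]).x      # stays near (2, 1)
print("Gaussian ML / least squares:", w_gauss)
print("Student-t ML (robust):      ", w_robust)
```

Because the t-density's tails decay polynomially rather than exponentially, each outlier exerts only bounded influence on the gradient, which is the mechanism behind the robustness.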


The AAAI 1999 Mobile Robot Competitions and Exhibitions

AI Magazine

The Eighth Annual Mobile Robot Competition and Exhibition was held as part of the Sixteenth National Conference on Artificial Intelligence in Orlando, Florida, 18 to 22 July. The goals of these robot events are to foster the sharing of research and technology, allow research groups to showcase their achievements, encourage students to enter robotics and AI fields at both the undergraduate and graduate level, and increase awareness of the field. The 1999 events included two robot contests; a new, long-term robot challenge; an exhibition; and a National Botball Championship for high school teams sponsored by the KISS Institute. Each of these events is described in detail in this article.


On-Line Learning with Restricted Training Sets: Exact Solution as Benchmark for General Theories

Neural Information Processing Systems

Calculation of Q(t) and R(t) using (4), (5), (7), and (9) to execute the path average and the average over sets is relatively straightforward, albeit tedious. We find that −γ_t(1 − γ_t) …
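For context, in this line of work Q(t) and R(t) are standardly the self-overlap of the student weight vector, Q = J·J/N, and its overlap with the teacher, R = J·B/N; the dynamical equations (4), (5), (7), and (9) are not reproduced in this excerpt. The sketch below therefore estimates Q(t) and R(t) by direct Monte Carlo simulation instead: a student learns online from a fixed, restricted training set of p = αN examples, and both the data set and the example path are averaged over. The Hebbian update rule, the learning rate, and all sizes are illustrative assumptions, not the paper's equations.

```python
import numpy as np

# Monte Carlo estimate of the order parameters Q(t) = J.J/N and R(t) = J.B/N
# for online learning from a *restricted* (fixed, finite) training set.
# Update rule and constants are illustrative assumptions.
rng = np.random.default_rng(1)
N, alpha, eta, steps, runs = 200, 0.5, 0.1, 2000, 20
p = int(alpha * N)                       # restricted training-set size

Q_avg = np.zeros(steps)
R_avg = np.zeros(steps)
for _ in range(runs):                    # average over data sets and paths
    B = rng.standard_normal(N)
    B *= np.sqrt(N) / np.linalg.norm(B)  # teacher normalized so B.B = N
    X = rng.standard_normal((p, N))      # the fixed training inputs
    y = np.sign(X @ B)                   # teacher-assigned labels
    J = np.zeros(N)                      # student starts from zero
    for s in range(steps):
        i = rng.integers(p)              # resample from the SAME p examples
        J += (eta / N) * y[i] * X[i]     # online Hebbian update
        Q_avg[s] += J @ J / N
        R_avg[s] += J @ B / N
Q_avg /= runs
R_avg /= runs
print("final Q, R:", Q_avg[-1], R_avg[-1])
```

Averaged curves like these are exactly the quantities that an analytical solution of the dynamics must reproduce to serve as a benchmark.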


Linear Hinge Loss and Average Margin

Neural Information Processing Systems

We describe a unifying method for proving relative loss bounds for online linear-threshold classification algorithms, such as the Perceptron and the Winnow algorithms. For classification problems the discrete loss is used, i.e., the total number of prediction mistakes. We introduce a continuous loss function, called the "linear hinge loss", that can be employed to derive the updates of the algorithms. We first prove bounds w.r.t. the linear hinge loss and then convert them to the discrete loss. We introduce a notion of "average margin" of a set of examples. We show how relative loss bounds based on the linear hinge loss can be converted to relative loss bounds in terms of the discrete loss using the average margin.
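To illustrate how a continuous surrogate can "derive the updates" of such an algorithm, the sketch below writes the Perceptron update as online subgradient descent on a hinge-type loss of the form max(0, −y(w·x)), tracking the discrete loss (mistake count) alongside the cumulative hinge loss. The precise definition of the paper's linear hinge loss may differ from this form, and the toy data and names are illustrative assumptions.

```python
import numpy as np

def linear_hinge(w, x, y):
    """Hinge-type surrogate max(0, -y*(w.x)): zero exactly when the
    prediction sign is correct, linear in the margin otherwise."""
    return max(0.0, -y * float(w @ x))

def perceptron_online(examples, eta=1.0):
    """Online Perceptron as subgradient descent on the hinge-type loss."""
    w = np.zeros(examples[0][0].size)
    mistakes, hinge_total = 0, 0.0
    for x, y in examples:
        hinge_total += linear_hinge(w, x, y)
        if y * (w @ x) <= 0:           # discrete loss: a prediction mistake
            mistakes += 1
            w += eta * y * x           # subgradient step at this example
    return w, mistakes, hinge_total

# Linearly separable toy stream (illustrative only).
rng = np.random.default_rng(2)
u = np.array([1.0, -2.0, 0.5])
stream = [(x, 1 if x @ u > 0 else -1)
          for x in rng.standard_normal((500, 3))]
w, m, h = perceptron_online(stream)
print(f"mistakes (discrete loss): {m}, cumulative hinge loss: {h:.2f}")
```

Roughly speaking, the conversion result in the abstract then bounds the mistake count in terms of the cumulative hinge loss of a comparison vector, scaled by its average margin on the examples.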


AAAI News

AI Magazine

Students interested in attending the National Conference on Artificial Intelligence in Austin, July 30-August 3, 2000, should consult the AAAI web site for further information about the Student Abstract program and the Doctoral Consortium. Details about these programs have also been mailed to all AAAI members. The Scholarship Program provides partial travel support and a complimentary technical program registration for students who (1) are full-time undergraduate or graduate students at colleges and universities; (2) are members of AAAI; (3) submit papers to the technical program or letters of recommendation from their faculty adviser; and (4) submit scholarship applications to AAAI by April 15, 2000. In addition, repeat scholarship applicants must have fulfilled the volunteer and reporting requirements for previous awards. In the event that scholarship applications exceed available funds, preference … AAAI President David Waltz presented the 1999 AAAI Classic Paper Award to John McDermott for R1: An Expert in the Computer Systems Domain.


On-line Learning from Finite Training Sets in Nonlinear Networks

Neural Information Processing Systems

Online learning is one of the most common forms of neural network training. We present an analysis of online learning from finite training sets for nonlinear networks (namely, soft-committee machines), advancing the theory to more realistic learning scenarios. Dynamical equations are derived for an appropriate set of order parameters; these are exact in the limiting case of either linear networks or infinite training sets. Preliminary comparisons with simulations suggest that the theory captures some effects of finite training sets, but may not yet account correctly for the presence of local minima.
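For readers unfamiliar with the model: a soft-committee machine is a two-layer network whose hidden-to-output weights are all fixed to one, so only the hidden weights are trained, typically with erf activations. The sketch below runs plain online gradient descent on such a machine using a finite training set whose examples are recycled at random, the scenario the abstract analyzes; the network sizes, learning rate, and all names are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.special import erf

def soft_committee(W, x):
    """Soft-committee machine: a sum of erf hidden units with all
    hidden-to-output weights fixed at 1; only W (K x N) is trainable."""
    return erf(W @ x / np.sqrt(2)).sum()

rng = np.random.default_rng(3)
N, K, p, eta, steps = 50, 3, 100, 0.5, 5000
W_teacher = rng.standard_normal((K, N))     # teacher network
X = rng.standard_normal((p, N))             # finite training set
y = np.array([soft_committee(W_teacher, x) for x in X])

W = 1e-3 * rng.standard_normal((K, N))      # student, small random init
for _ in range(steps):
    i = rng.integers(p)                     # finite set: examples recycle
    x, target = X[i], y[i]
    h = W @ x
    err = soft_committee(W, x) - target
    # SGD on 0.5*err**2; with g(h) = erf(h/sqrt(2)),
    # the activation derivative is g'(h) = sqrt(2/pi) * exp(-h**2 / 2).
    grad = err * (np.sqrt(2 / np.pi) * np.exp(-h ** 2 / 2))[:, None] * x
    W -= (eta / N) * grad

mse = np.mean([(soft_committee(W, xi) - yi) ** 2 for xi, yi in zip(X, y)])
print("training MSE after online SGD:", mse)
```

Order-parameter equations of the kind derived in the paper aim to predict the macroscopic statistics of W's evolution under exactly this sort of update, without tracking individual weights.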