The Ninth International Conference on Machine Learning

AI Magazine

The Ninth International Conference on Machine Learning was held in Aberdeen, Scotland, from 1 to 3 July 1992, with 198 participants in attendance. The conference covered a broad range of topics drawn from the general area of machine learning, including concept-learning algorithms, clustering, speedup learning, formal analysis of learning systems, neural networks, genetic algorithms, and applications of machine learning. This article briefly touches on six selected talks that were of exceptional interest. Conference organizers were Derek Sleeman (conference chair) and Peter Edwards (local arrangements chair), both of the University of Aberdeen. Since the first machine-learning workshop was held at Carnegie-Mellon University (CMU) in July 1980, meetings have been held regularly, alternating between a more formal conference format and a more informal workshop format.


All conference sessions were held on the stately campus of King's College. Following the conference, some participants elected to stay an extra day to participate in one of several informal workshops. Because it is impossible to review all the papers presented, the capsule summaries in this article are intended to serve as a representative sample of the research presented at the conference.

Laird presented his work on dynamic optimization of pure Prolog programs; his system is a greedy one, based on hill climbing. Christiansen applied clustering techniques to the problem of learning action models for a robot, acquiring descriptions of funnels: operators characterized by their preconditions and postconditions. Once these descriptions are obtained, planning becomes simply a matter of search to identify a sequence of funnels with high probability of success. Like Laird, Christiansen used an empirical test, on the classic tray-tilting manipulation problem, to evaluate the effectiveness of his approach.
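
The report gives only a high-level description of the funnel-based planner. The Python sketch below illustrates, under assumptions of my own, the general idea of planning as search over operators ("funnels") with empirically estimated success probabilities; the Funnel class, the plan_with_funnels function, and the toy funnel table are hypothetical illustrations, not Christiansen's system.

from dataclasses import dataclass
from heapq import heappush, heappop
import math

@dataclass(frozen=True)
class Funnel:
    name: str
    pre: str          # region in which the funnel can be applied
    post: str         # region the funnel reliably reaches
    p_success: float  # empirically estimated probability of success

def plan_with_funnels(funnels, start, goal):
    """Best-first search for the funnel sequence with the highest
    estimated probability of reaching goal from start."""
    # Use -log(p) as the step cost so that minimizing total cost
    # maximizes the product of per-funnel success probabilities.
    frontier = [(0.0, start, [])]
    best = {start: 0.0}
    while frontier:
        cost, region, plan = heappop(frontier)
        if region == goal:
            return plan, math.exp(-cost)
        for f in funnels:
            if f.pre == region and f.p_success > 0:
                new_cost = cost - math.log(f.p_success)
                if new_cost < best.get(f.post, float("inf")):
                    best[f.post] = new_cost
                    heappush(frontier, (new_cost, f.post, plan + [f.name]))
    return None, 0.0

# Toy, tray-tilting-flavored domain (entirely made up).
funnels = [
    Funnel("tilt-east", "center", "east-wall", 0.9),
    Funnel("tilt-north", "east-wall", "ne-corner", 0.8),
    Funnel("shake", "center", "ne-corner", 0.3),
]
print(plan_with_funnels(funnels, "center", "ne-corner"))
# -> (['tilt-east', 'tilt-north'], 0.72...)

The log transform is only a convenience: it lets an ordinary shortest-path search pick the chain of funnels whose combined success probability is highest.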



Learning to Schedule Straight-Line Code

Neural Information Processing Systems

Program execution speed on modern computers is sensitive, by a factor of two or more, to the order in which instructions are presented to the processor. To realize potential execution efficiency, an optimizing compiler must employ a heuristic algorithm for instruction scheduling. Such algorithms are painstakingly handcrafted, which is expensive and time-consuming. We show how to cast the instruction scheduling problem as a learning task, obtaining the heuristic scheduling algorithm automatically. Our focus is the narrower problem of scheduling straight-line code (also called basic blocks of instructions). Our empirical results show that just a few features are adequate for quite good performance at this task for a real modern processor, and that any of several supervised learning methods perform nearly optimally with respect to the features used.
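
The abstract does not spell out the features or the learning method, so the following Python sketch only illustrates the general recipe under assumptions of my own: a greedy list scheduler for a basic block whose choice among ready instructions is scored by a learned linear function over a few per-instruction features. The Instr class, the three features, the toy block, and the weight vector are hypothetical; they are not the paper's feature set or model.

from dataclasses import dataclass, field

@dataclass
class Instr:
    name: str
    latency: int
    deps: list = field(default_factory=list)  # instructions that must complete first

def features(instr, height):
    # Three illustrative features: critical-path height, latency, number of dependences.
    return [height[instr.name], instr.latency, len(instr.deps)]

def schedule(block, weights):
    """Greedy list scheduling: repeatedly emit the ready instruction whose
    feature vector scores highest under the learned weights."""
    # Critical-path height: longest latency-weighted path from an instruction
    # to the end of the block (block is assumed topologically ordered).
    succs = {i.name: [] for i in block}
    for i in block:
        for d in i.deps:
            succs[d.name].append(i)
    height = {}
    for i in reversed(block):
        height[i.name] = i.latency + max((height[s.name] for s in succs[i.name]), default=0)

    done, order, remaining = set(), [], list(block)
    while remaining:
        ready = [i for i in remaining if all(d.name in done for d in i.deps)]
        best = max(ready, key=lambda i: sum(w * f for w, f in zip(weights, features(i, height))))
        order.append(best.name)
        done.add(best.name)
        remaining.remove(best)
    return order

# Toy basic block: an independent multiply can be issued while the load's result is pending.
ld = Instr("load", latency=3)
mul = Instr("mul", latency=2)
add = Instr("add", latency=1, deps=[ld])
weights = [1.0, 0.5, 0.0]  # stand-in for weights fit by a supervised learner
print(schedule([ld, mul, add], weights))  # -> ['load', 'mul', 'add']

In a setup like this, the weights would be fit by supervised learning from blocks scheduled by a slow optimal (or near-optimal) scheduler, which is the sense in which instruction scheduling becomes a learning task.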

