Adaptive Caching by Refetching
Gramacy, Robert B., Warmuth, Manfred K., Brandt, Scott A., Ari, Ismail
We are constructing caching policies that have 13-20% lower miss rates than the best of twelve baseline policies over a large variety of request streams. This represents an improvement of 49-63% over Least Recently Used, the most commonly implemented policy. We achieve this not by designing a specific new policy but by using online Machine Learning algorithms to dynamically shift between the standard policies based on their observed miss rates. A thorough experimental evaluation of our techniques is given, as well as a discussion of what makes caching an interesting online learning problem.
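The approach lends itself to a compact sketch: treat each fixed policy as an expert and reweight the experts by their observed miss rates. The Hedge-style multiplicative update below is one plausible instantiation; the policy names, the learning rate, and the per-window miss-rate losses are illustrative assumptions, not the authors' exact construction.

import math

class PolicySwitcher:
    """Hedge-style mixture over fixed caching policies.

    Each expert is a virtual cache running one baseline policy; after each
    window of requests the master reweights experts by their observed miss
    rates and follows the currently best one. A minimal sketch.
    """

    def __init__(self, policies, eta=0.5):
        self.policies = policies            # e.g. ["LRU", "LFU", "FIFO"]
        self.eta = eta                      # learning rate (assumed value)
        self.weights = [1.0] * len(policies)

    def update(self, miss_rates):
        # Multiplicative update: penalize each policy by its miss rate in [0, 1].
        for i, m in enumerate(miss_rates):
            self.weights[i] *= math.exp(-self.eta * m)
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

    def pick(self):
        # Serve the next window with the highest-weight policy.
        return max(zip(self.weights, self.policies))[1]

Because each expert's loss is its own miss rate, the master can track whichever baseline happens to be best on the current request stream, which is what lets the combined policy beat any single fixed one.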
WEBODE in a Nutshell
Arpirez, Julio Cesar, Corcho, Oscar, Fernandez-Lopez, Mariano, Gomez-Perez, Asuncion
WEBODE is a scalable workbench for ontological engineering that eases the design, development, and management of ontologies and includes middleware services to aid in the integration of ontologies into real-world applications. WEBODE provides a framework for integrating new ontology-based tools and services, in which developers need only implement the new logic they want to provide on top of the knowledge stored in their ontologies.
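The integration model the abstract describes, in which developers supply only new logic over stored ontologies, follows a familiar plugin pattern. The sketch below is a generic, hypothetical illustration of that pattern; it is not WEBODE's actual middleware API, and all class and method names are invented for illustration.

class OntologyService:
    """Hypothetical base class for a tool plugged into an ontology workbench."""

    def __init__(self, ontology_store):
        # The workbench supplies storage, access, and services;
        # the plugin author supplies only the run() logic.
        self.store = ontology_store

    def run(self, ontology_name):
        raise NotImplementedError

class OrphanConceptFinder(OntologyService):
    """Example plugin: report concepts with no parent in the hierarchy."""

    def run(self, ontology_name):
        ontology = self.store.load(ontology_name)   # hypothetical store API
        return [c.name for c in ontology.concepts if not c.parents]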
Editorial
I'm delighted to bring our readers the news of an exciting resource for AAAI members. AAAI has now completed a major initiative, begun five years ago, to develop a digital library of AAAI publications. The collection now comprises approximately 13,000 papers, including the full set of papers from the AAAI proceedings, papers from other major conferences, AAAI workshop and symposium technical reports, selected AAAI Press books, and the full contents of AI Magazine. This already-extensive collection is a growing resource, with new publications and access methods to be added over time. I encourage readers to visit it at the members' library section of the AAAI web site, www.aaai.org.
AAAI News
Chair: Terry Payne (trp@ecs.soton.ac.uk). Nominators should contact candidates prior to nomination to confirm that they are willing to serve should they be elected; the deadline for nominations is November 1, 2003. The AI Alert newsletter, which highlights selected features from the "AI in the News" collection, will be mailed to all AAAI members; be sure to visit the AI Topics web site at www.aaai.org/AITopics/aitopics.html. The Spring Symposium Series will be held at Stanford University (www.aaai.org/Symposia/symposia.html); the tentative organizing committee includes Lloyd Greenwald, and submissions will be due to the organizers. Please mark your calendars now for the Sixteenth Innovative Applications of Artificial Intelligence Conference (IAAI-04)!
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
Khardon, Roni, Roth, Dan, Servedio, Rocco A.
We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features. We demonstrate a tradeoff between the computational efficiency with which these kernels can be computed and the generalization ability of the resulting classifier. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over an exponential number of conjunctions; however we also prove that using such kernels the Perceptron algorithm can make an exponential number of mistakes even when learning simple functions. We also consider an analogous use of kernel functions to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. While known upper bounds imply that Winnow can learn DNF formulae with a polynomial mistake bound in this setting, we prove that it is computationally hard to simulate Winnow's behavior for learning DNF over such a feature set, and thus that such kernel functions for Winnow are not efficiently computable.
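The all-conjunctions construction admits a closed-form kernel: the number of conjunctions over literals satisfied by both Boolean vectors x and y is 2^same(x,y), where same counts the positions on which they agree (each agreeing position may contribute its matching literal or be skipped; disagreeing positions must be skipped). The sketch below runs the kernel Perceptron under that assumption; the streaming interface is illustrative.

def conjunction_kernel(x, y):
    # Inner product in the space of all conjunctions over literals:
    # exactly 2**same(x, y) conjunctions are satisfied by both x and y.
    same = sum(1 for a, b in zip(x, y) if a == b)
    return 2 ** same

def kernel_perceptron(stream, kernel=conjunction_kernel):
    """Perceptron run implicitly over exponentially many conjunction features.

    `stream` yields (x, label) pairs with labels in {-1, +1}. Mistakes are
    stored as support examples and predictions use the kernel expansion,
    so the exponential feature space is never materialized.
    """
    mistakes = []                                   # list of (x, label)
    for x, label in stream:
        score = sum(l * kernel(xs, x) for xs, l in mistakes)
        prediction = 1 if score > 0 else -1
        if prediction != label:
            mistakes.append((x, label))
    return mistakes

The tradeoff in the abstract then shows up directly: each prediction costs time polynomial in the dimension and the number of stored mistakes, but the mistake list itself, and hence the running time, can grow exponentially even for simple target functions.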
On the Generalization Ability of On-Line Learning Algorithms
Cesa-bianchi, Nicolò, Conconi, Alex, Gentile, Claudio
In this paper we show that online algorithms for classification and regression can be naturally used to obtain hypotheses with good data-dependent tail bounds on their risk. Our results are proven without requiring complicated concentration-of-measure arguments, and they hold for arbitrary online learning algorithms. Furthermore, when applied to concrete online algorithms, our results yield tail bounds that in many cases are comparable to or better than the best known bounds.
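The flavor of the result can be sketched with the basic online-to-batch bound it refines. If an online algorithm suffers cumulative loss M_n on n examples, with losses bounded in [0, 1], a standard Azuma-Hoeffding martingale argument gives, with probability at least 1 - \delta,

\frac{1}{n}\sum_{t=1}^{n} \mathrm{risk}(h_{t-1}) \;\le\; \frac{M_n}{n} + \sqrt{\frac{2}{n}\ln\frac{1}{\delta}},

so the average risk of the hypotheses produced along the run is controlled by the observed online loss. This display is a sketch of the standard argument, not the paper's exact statement; the paper's contribution is sharper data-dependent versions obtained without heavy concentration-of-measure machinery.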
Report on the First International Conference on Knowledge Capture (K-CAP)
Gil, Yolanda, Musen, Mark, Shavlik, Jude
Henry Lieberman surveyed successful techniques for programming by example, an approach in which end users teach procedures to computers by demonstrating a sequence of actions on concrete examples. This new conference series promotes multidisciplinary research on tools and methodologies for efficiently capturing knowledge, including domain-independent inference structures and reusable domain-specific ontologies, with presentations that included practical exercises and illustrated the concepts with applications. Proceedings are available through the ACM Digital Library (portal.acm.org). For any inquiries, please email info@kcap.org.