Tight performance bounds on greedy policies based on imperfect value functions

Williams, R. J. | Baird, L. C. III

Classics

Reinforcement learning is an effective technique for learning action policies in discrete stochastic environments, but its efficiency can decay exponentially with the size of the state space. In many situations significant portions of a large state space may be irrelevant to a specific goal and can be aggregated into a few relevant states. The U Tree algorithm generates a tree-based state discretization that efficiently finds the relevant state chunks of large propositional domains. In this paper, we extend the U Tree algorithm to challenging domains with a continuous state space for which there is no initial discretization.
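
To make the state-aggregation idea concrete, here is a minimal sketch of tabular Q-learning run over a hand-built split tree that maps a continuous observation to one of a few abstract states. The tree, actions, and learning parameters are invented for illustration; this is not the paper's algorithm, which grows its discretization automatically.

```python
# Minimal sketch (not the paper's algorithm): a fixed, hand-built split tree
# aggregates a continuous observation into a small number of abstract states,
# and tabular Q-learning runs over the resulting leaves. All splits, actions,
# and parameters below are illustrative assumptions.
import random
from collections import defaultdict

class SplitNode:
    """Internal node: route on one observation dimension against a threshold."""
    def __init__(self, dim, threshold, left, right):
        self.dim, self.threshold, self.left, self.right = dim, threshold, left, right

    def leaf(self, obs):
        child = self.left if obs[self.dim] <= self.threshold else self.right
        return child.leaf(obs)

class Leaf:
    """Leaf node: one aggregated (abstract) state."""
    _count = 0
    def __init__(self):
        self.id = Leaf._count
        Leaf._count += 1

    def leaf(self, obs):
        return self.id

# Hypothetical 2-D continuous observation space split into three abstract states.
tree = SplitNode(0, 0.5, Leaf(), SplitNode(1, 0.0, Leaf(), Leaf()))

Q = defaultdict(float)            # Q[(abstract_state, action)]
actions = [0, 1]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def act(obs):
    """Epsilon-greedy action selection over the aggregated state."""
    s = tree.leaf(obs)
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def update(obs, a, reward, next_obs):
    """One-step Q-learning update on the aggregated states."""
    s, s2 = tree.leaf(obs), tree.leaf(next_obs)
    target = reward + gamma * max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

obs = (0.2, -0.3)
a = act(obs)
update(obs, a, reward=1.0, next_obs=(0.7, 0.1))
```

The point of the sketch is only the interface: once observations are routed to leaves, ordinary tabular methods apply to the aggregated states.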



Symbolic Model Checking

McMillan, K. L.

Classics

Formal verification means having a mathematical model of a system, a language for specifying desired properties of the system in a concise, comprehensible and unambiguous way, and a method of proof to verify that the specified properties are satisfied. When the method of proof is carried out substantially by machine, we speak of automatic verification. Symbolic Model Checking deals with methods of automatic verification as applied to computer hardware. The practical motivation for study in this area is the high and increasing cost of correcting design errors in VLSI technologies. There is a growing demand for design methodologies that can yield correct designs on the first fabrication run.
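
As a rough flavor of automatic verification, the sketch below checks a safety property by computing the reachable states of a tiny invented transition system as a fixed point. Real symbolic model checkers compute the same kind of fixed point over BDD-encoded state sets rather than the explicit Python sets used here.

```python
# Illustrative sketch only: the fixed-point style of reachability checking that
# underlies symbolic model checking, computed over explicit Python sets rather
# than BDD-encoded state sets. The transition system below is invented.

def reachable(initial, transitions):
    """Least fixed point of the successor operation, starting from the initial states."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        frontier = {t for s in frontier for t in transitions.get(s, ())} - reached
        reached |= frontier
    return reached

# A four-state system with one "bad" state we want to prove unreachable.
transitions = {"idle": ["busy"], "busy": ["idle", "done"], "done": ["idle"], "error": []}
bad_states = {"error"}

if reachable({"idle"}, transitions) & bad_states:
    print("safety property violated: a bad state is reachable")
else:
    print("safety property holds for all reachable states")
```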


A review of statistical data association techniques for motion correspondence

Cox, I.

Classics

Motion correspondence is a fundamental problem in computer vision and many other disciplines. This article describes statistical data association techniques originally developed in the context of target tracking and surveillance and now beginning to be used in dynamic motion analysis by the computer vision community. The Mahalanobis distance measure is first introduced before discussing the limitations of nearest neighbor algorithms. Then the track-splitting, joint likelihood, and multiple hypothesis algorithms are described, each method solving an increasingly more complicated optimization problem. Real-time constraints may prohibit the application of these optimal methods.
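
For the baseline the article starts from, here is a hedged sketch of Mahalanobis-distance gating followed by nearest-neighbor assignment. The covariances, gate threshold, and toy tracks are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Mahalanobis-distance gating plus nearest-neighbor assignment,
# the simple baseline contrasted with track-splitting, joint likelihood, and
# multiple hypothesis methods. All numbers below are illustrative placeholders.
import numpy as np

def mahalanobis_sq(z, z_pred, S):
    """Squared Mahalanobis distance between measurement z and prediction z_pred
    with innovation covariance S."""
    d = z - z_pred
    return float(d @ np.linalg.inv(S) @ d)

def nearest_neighbor_associate(tracks, measurements, gate=9.21):
    """Assign each track the closest measurement inside its validation gate.
    `tracks` is a list of (z_pred, S) pairs; gate=9.21 is roughly the 99%
    chi-square threshold for 2-D measurements (an assumed choice)."""
    assignments = {}
    for i, (z_pred, S) in enumerate(tracks):
        dists = [(mahalanobis_sq(z, z_pred, S), j) for j, z in enumerate(measurements)]
        dists = [(d, j) for d, j in dists if d <= gate]
        if dists:
            assignments[i] = min(dists)[1]
    return assignments

# Toy usage: two predicted track positions and three candidate measurements.
tracks = [(np.array([0.0, 0.0]), np.eye(2)), (np.array([5.0, 5.0]), np.eye(2))]
measurements = [np.array([0.2, -0.1]), np.array([5.3, 4.8]), np.array([9.0, 9.0])]
print(nearest_neighbor_associate(tracks, measurements))  # e.g. {0: 0, 1: 1}
```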


Sequencing and scheduling: Algorithms and complexity

Lawler, E. L. | Lenstra, J. K. | Rinnooy Kan, A. H. G. | Shmoys, D. B.

Classics

Sequencing and scheduling as a research area is motivated by questions that arise in production planning. We review complexity results and optimization and approximation algorithms. The chapter is organized as follows. There are several survey papers that complement the present chapter. In this section we review the main points of this theory. NP-completeness of a particular problem is strong evidence that a polynomial-time algorithm for its solution is unlikely to exist. The wide applicability of the notion of NP-completeness was observed by Karp, who proved that 21 basic problems are NP-complete.
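
As a small concrete counterpoint to the NP-complete cases, the sketch below implements one of the classic polynomial-time scheduling results, the shortest-processing-time (SPT) rule, which minimizes total completion time on a single machine. The job data are invented.

```python
# Illustrative sketch (not from the chapter): the shortest-processing-time (SPT)
# rule, a classic polynomial-time algorithm that minimizes total completion time
# on a single machine. The processing times below are hypothetical.

def spt_schedule(processing_times):
    """Return the job order and total completion time under the SPT rule."""
    order = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
    total, elapsed = 0, 0
    for j in order:
        elapsed += processing_times[j]   # completion time of job j
        total += elapsed
    return order, total

jobs = [4, 1, 3, 2]                      # hypothetical processing times
print(spt_schedule(jobs))                # ([1, 3, 2, 0], 20)
```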


Machine discovery of effective admissible heuristics

Prieditis, A. E.

Classics

Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest path solutions in search algorithms such as A* and they guarantee less expensively produced, but boundedly longer solutions in search algorithms such as dynamic weighting. Unfortunately, effective (accurate and cheap to compute) admissible heuristics can take years for people to discover. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, called abstractions, that are guaranteed to generate only admissible heuristics. It also describes and evaluates an implemented program (Absolver II) that uses a means-ends analysis search control strategy to discover abstracted problems that result in effective admissible heuristics.
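
The general idea can be illustrated with a deliberately simple abstraction that is not Absolver II: drop a constraint from the problem, solve the abstraction exactly, and use that cost as an A* heuristic. In the grid-pathfinding sketch below, ignoring the walls makes the exact abstract cost the Manhattan distance, which can never overestimate and therefore keeps A* optimal.

```python
# Hedged illustration of abstraction-derived admissible heuristics (not the
# paper's system): the concrete problem is grid pathfinding with walls, and the
# abstraction ignores the walls, so its exact solution cost is the Manhattan
# distance. The grid, walls, and goal below are invented.
import heapq

def manhattan(a, b):
    """Exact cost in the wall-free abstraction: an admissible heuristic."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal, walls, size=5):
    frontier = [(manhattan(start, goal), 0, start)]
    best = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                      # guaranteed shortest path length
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt, goal), g + 1, nxt))
    return None

print(astar((0, 0), (4, 4), walls={(1, 1), (2, 1), (3, 1)}))  # 8
```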


Automatically constructing a dictionary for information extraction tasks

Riloff, E.

Classics

Knowledge-based natural language processing systems have achieved good success with certain tasks but they are often criticized because they depend on a domain-specific dictionary that requires a great deal of manual knowledge engineering. This knowledge engineering bottleneck makes knowledge-based NLP systems impractical for real-world applications because they cannot be easily scaled up or ported to new domains. In response to this problem, we developed a system called AutoSlog that automatically builds a domain-specific dictionary of concepts for extracting information from text. Using AutoSlog, we constructed a dictionary for the domain of terrorist event descriptions in only 5 person-hours. We then compared the AutoSlog dictionary with a handcrafted dictionary that was built by two highly skilled graduate students and required approximately 1500 person-hours of effort. We evaluated the two dictionaries using two blind test sets of 100 texts each. Overall, the AutoSlog dictionary achieved 98% of the performance of the handcrafted dictionary. On the first test set, the AutoSlog dictionary obtained 96.3% of the performance of the handcrafted dictionary. On the second test set, the overall scores were virtually indistinguishable with the AutoSlog dictionary achieving 99.7% of the performance of the handcrafted dictionary.
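
A heavily simplified sketch of the underlying idea, not Riloff's implementation: given a sentence and a targeted noun phrase from an answer key, propose an extraction pattern anchored on a nearby verb. The heuristics and example text below are invented.

```python
# Rough, invented illustration of automatically proposing extraction patterns
# from example sentences; AutoSlog's actual heuristics and pattern language
# differ and are described in the paper.

def propose_pattern(tokens, target_index):
    """Return a crude '<x> was verb' or 'verb <x>' pattern for the target token."""
    # passive-style cue: "<target> was <verb>" -> pattern "<x> was <verb>"
    if target_index + 2 < len(tokens) and tokens[target_index + 1] in ("was", "were"):
        return f"<x> was {tokens[target_index + 2]}"
    # active-style cue: "<verb> <target>" -> pattern "<verb> <x>"
    if target_index > 0:
        return f"{tokens[target_index - 1]} <x>"
    return None

sentence = "the mayor was kidnapped by armed men".split()
print(propose_pattern(sentence, sentence.index("mayor")))   # "<x> was kidnapped"
```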


Parsing English with a link grammar

Sleator, D. | Temperley, D.

Classics



Complexity results for serial decomposability

Bylander, T.

Classics

Korf (1985) presents a method for learning macro-operators and shows that the method is applicable to serially decomposable problems. Tadepalli (1991a, 1991b) shows how macro tables are polynomially PAC-learnable. Chalasani et al. show that this problem is in NP, but NP-completeness is open.


On the subjective meaning of probability

de Finetti, B.

Classics

Pragmatism, taken not just as a philosophical movement but as a way of addressing problems, strongly influenced the debate on the foundations of probability during the first half of the twentieth century. Upholders of different interpretations of probability such as Hans Reichenbach, Ernest Nagel, Rudolf Carnap, Frank Ramsey, and Bruno de Finetti acknowledged their debt towards pragmatist philosophers, including Charles Sanders Peirce, William James, Clarence Irving Lewis, John Dewey, and Giovanni Vailati. In addition, scientist-philosophers like Ernst Mach, Ludwig Boltzmann, Henri Poincaré, Pierre Duhem, and Karl Pearson, who heralded a conception of science and knowledge at large that was close to pragmatism, were very influential in that debate. Among the main interpretations of probability (frequentism, propensionism, logicism, and subjectivism), the last is no doubt the closest to the pragmatist outlook. This paper concentrates on three representatives of the subjective theory, namely Frank Ramsey, Bruno de Finetti, and Émile Borel.