
Response to Sloman's Review of Affective Computing

AI Magazine

Sloman was one of the first in the AI community to write about the role of emotion in computing (Sloman and Croucher 1981), and I value his insight into theories of emotional and intelligent systems. We differ largely on some details related to unknown features of human emotion; hence, I don't think the review captures the flavor of the book. He does, however, raise interesting points, as well as potential misunderstandings, both of which I am grateful for the opportunity to comment on. Sloman's review might seem confusing in places whether or not you have read my book. Affective cues are a natural way that humans give feedback to learning systems. My students and I currently use tools of expression recognition to gather data to hone the abilities of our research systems, always with the consent of those involved. I use the expression "emotion recognition" only as shorthand for the unwieldy but more accurate description "inference of an emotional state from observations": a computer cannot directly read internal thoughts or feelings, and therefore there is no "emotion detector" (chapter 4). It can, however, detect certain expressions that arise in conjunction with an internal state, such as the pressure profile of banging on a mouse or a video signal. In contexts where humans interact with computers naturally and socially (Reeves and Nass 1996), current forms of computer-mediated interaction limit affective communication; interaction might improve if the system could speed up when we seem bored or offer an alternate explanation when we appear confused. Nontechnical users are in the majority, and they tend not to understand the limits of the technology; their feelings and fears demand attention, even toward relatively benign intrusions such as emotional agents that jiggle about on the screen, smiling in an annoying and inappropriate fashion while costing you precious time. Although inappropriate use of affect might be the most common affront with this technology, there are also potentially more serious problems that can arise, especially given an incomplete understanding of the phenomena.


Modeling Belief in Dynamic Systems, Part II: Revision and Update

Journal of Artificial Intelligence Research

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper (Friedman & Halpern, 1997), we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations including revision and update.
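
For readers who want the contrast between the two operations made concrete, the standard Katsuno-Mendelzon-style semantic characterization (stated here in generic notation, not necessarily the notation of the paper or its companion) distinguishes them by where the minimization over models happens:

    \[ \mathrm{Mod}(\psi \circ \mu) \;=\; \min\!\big(\mathrm{Mod}(\mu),\ \le_{\psi}\big) \quad \text{(revision: one ordering for the belief set as a whole)} \]
    \[ \mathrm{Mod}(\psi \diamond \mu) \;=\; \bigcup_{w \in \mathrm{Mod}(\psi)} \min\!\big(\mathrm{Mod}(\mu),\ \le_{w}\big) \quad \text{(update: a separate ordering centered at each model of } \psi\text{)} \]

The pointwise minimization in update is one way to see why update rests on stronger assumptions than revision, which is the kind of comparison the framework is used to make precise.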


A Counter Example to Theorems of Cox and Fine

Journal of Artificial Intelligence Research

Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
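
For context, Cox-style arguments are usually set up along the following lines (generic notation, not the specific formulation examined in the paper): a real-valued plausibility is assumed to satisfy the functional equations

    \[ \mathrm{Pl}(A \wedge B \mid C) \;=\; F\big(\mathrm{Pl}(A \mid B \wedge C),\ \mathrm{Pl}(B \mid C)\big), \qquad \mathrm{Pl}(\neg A \mid B) \;=\; S\big(\mathrm{Pl}(A \mid B)\big), \]

and, under regularity conditions on F and S, is then argued to be monotonically rescalable to a probability measure obeying the product and sum rules. The counterexample shows that in finite domains the assumed conditions are not enough to force this conclusion.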


Solving Highly Constrained Search Problems with Quantum Computers

Journal of Artificial Intelligence Research

A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms.
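
The structured algorithm itself is not reproduced here; as a point of reference for the contrast drawn above, the following toy state-vector simulation of unstructured (Grover-style) amplitude amplification shows the roughly sqrt(N) growth in steps that structure-ignoring quantum search incurs (function and variable names are illustrative only):

    import numpy as np

    def unstructured_steps(n_vars, solutions):
        # Simulate Grover-style search over all 2^n assignments and count
        # amplification steps until a solution is measured with prob >= 0.5.
        # This is the structure-ignoring baseline, not the paper's algorithm.
        N = 2 ** n_vars
        amp = np.full(N, 1.0 / np.sqrt(N))
        steps = 0
        while sum(amp[s] ** 2 for s in solutions) < 0.5:
            for s in solutions:            # oracle: flip sign of solution amplitudes
                amp[s] = -amp[s]
            amp = 2 * amp.mean() - amp     # diffusion: inversion about the mean
            steps += 1
        return steps

    for n in range(4, 12):
        print(n, unstructured_steps(n, {0}))   # step count grows roughly like sqrt(2^n)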


Efficient Implementation of the Plan Graph in STAN

Journal of Artificial Intelligence Research

STAN is a Graphplan-based planner, so-called because it uses a variety of STate ANalysis techniques to enhance its performance. STAN competed in the AIPS-98 planning competition where it compared well with the other competitors in terms of speed, finding solutions fastest to many of the problems posed. Although the domain analysis techniques STAN exploits are an important factor in its overall performance, we believe that the speed at which STAN solved the competition problems is largely due to the implementation of its plan graph. The implementation is based on two insights: that many of the graph construction operations can be implemented as bit-level logical operations on bit vectors, and that the graph should not be explicitly constructed beyond the fixed point. This paper describes the implementation of STAN's plan graph and provides experimental results which demonstrate the circumstances under which advantages can be obtained from using this implementation.
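
A minimal sketch of the bit-vector idea (illustrative only; it does not reproduce STAN's actual data structures): a proposition layer is an integer whose bits mark which facts hold, and applicability tests and layer extension become single bitwise operations.

    def applicable(action, prop_layer):
        # every precondition bit must already be set in the layer
        return (prop_layer & action["pre"]) == action["pre"]

    def extend_layer(actions, prop_layer):
        # next proposition layer = current facts plus the add effects
        # of every applicable action, one OR per action
        new_layer = prop_layer
        for a in actions:
            if applicable(a, prop_layer):
                new_layer |= a["add"]
        return new_layer

    # facts indexed 0..3; one action needs facts {0, 1} and adds fact 2
    acts = [{"pre": 0b0011, "add": 0b0100}]
    print(bin(extend_layer(acts, 0b0011)))   # -> 0b111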


Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity

arXiv.org Artificial Intelligence

The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the negative log universal probability of the model and the negative log probability of the data given the model should be minimized. If we restrict the model class to finite sets, then application of the ideal principle turns into Kolmogorov's minimal sufficient statistic. In general we show that data compression is almost always the best strategy, both in hypothesis identification and prediction.
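
Written out, the minimization described above follows from Bayes's rule: maximizing the posterior is the same as minimizing a two-part code length, and ideal MDL obtains the prior from the algorithmic universal probability (the notation here is generic shorthand, not necessarily the paper's):

    \[ \max_{H} P(H \mid D) \;=\; \max_{H} \frac{P(D \mid H)\,P(H)}{P(D)} \;\Longleftrightarrow\; \min_{H} \big[ -\log P(D \mid H) - \log P(H) \big], \]
    \[ \text{ideal MDL:}\quad \min_{H} \big[ -\log P(D \mid H) + K(H) \big], \qquad P(H) := \mathbf{m}(H) \approx 2^{-K(H)}. \]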


TDLeaf(lambda): Combining Temporal Difference Learning with Game-Tree Search

arXiv.org Artificial Intelligence

In this paper we present TDLeaf(lambda), a variation on the TD(lambda) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in both chess and backgammon which demonstrate its utility and provide comparisons with TD(lambda) and another less radical variant, TD-directed(lambda). In particular, our chess program, "KnightCap," used TDLeaf(lambda) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games. We discuss some of the reasons for this success and the relationship between our results and Tesauro's results in backgammon.
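
A minimal sketch of the weight update this describes, assuming a linear evaluation function J(x, w) = w . phi(x) so that the gradient is just the feature vector; obtaining the principal-variation leaves from minimax search is engine-specific and not shown, and all names are illustrative:

    import numpy as np

    def tdleaf_update(pv_leaf_features, w, alpha=1e-3, lam=0.7):
        # pv_leaf_features: feature vectors phi(x_t^l) of the principal-
        # variation leaves found by search at each position of one game.
        w = np.asarray(w, dtype=float)
        phis = [np.asarray(p, dtype=float) for p in pv_leaf_features]
        evals = [float(w @ p) for p in phis]
        # temporal differences between successive PV-leaf evaluations
        d = [evals[t + 1] - evals[t] for t in range(len(evals) - 1)]
        update = np.zeros_like(w)
        for t in range(len(d)):
            # lambda-discounted sum of future temporal differences
            disc = sum(lam ** (j - t) * d[j] for j in range(t, len(d)))
            update += disc * phis[t]       # gradient of a linear eval is phi
        return w + alpha * update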


The Canonical Distortion Measure in Feature Space and 1-NN Classification

Neural Information Processing Systems

We prove that the Canonical Distortion Measure (CDM) [2, 3] is the optimal distance measure to use for 1-nearest-neighbour (1-NN) classification, and show that it reduces to squared Euclidean distance in feature space for function classes that can be expressed as linear combinations of a fixed set of features.
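
A sketch of why the reduction holds, under the assumption (generic notation, not necessarily the paper's) that targets are linear in a fixed feature map, f(x) = w^T phi(x), and the environment's distribution over weight vectors has second moment E[w w^T] = I:

    \[ \rho(x, x') \;=\; \mathbb{E}_{f}\big[(f(x) - f(x'))^2\big] \;=\; (\phi(x) - \phi(x'))^{\top}\, \mathbb{E}[w w^{\top}]\, (\phi(x) - \phi(x')) \;=\; \lVert \phi(x) - \phi(x') \rVert^{2}. \]

Any other second moment turns this into a Mahalanobis-type distance in feature space rather than the plain squared Euclidean one.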


Multiple Threshold Neural Logic

Neural Information Processing Systems

This observation has boosted interest in the field of artificial neural networks [Hopfield 82], [Rumelhart 82]. The latter are built by interconnecting artificial neurons whose behavior is inspired by that of biological neurons.


Active Data Clustering

Neural Information Processing Systems

Active data clustering is a novel technique for clustering of proximity data which utilizes principles from sequential experiment design in order to interleave data generation and data analysis. The proposed active data sampling strategy is based on the expected value of information, a concept rooted in statistical decision theory. This is considered to be an important step towards the analysis of large-scale data sets, because it offers a way to overcome the inherent data sparseness of proximity data.
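
A heavily simplified sketch of the interleaving idea; the scoring rule below (assignment ambiguity of the two objects involved) is only a stand-in for the paper's expected-value-of-information criterion, and all names are illustrative:

    import numpy as np

    def pick_next_query(soft_assign, observed):
        # soft_assign: (n, k) matrix of current soft cluster memberships
        # observed:    (n, n) boolean matrix of already-measured proximities
        # Score each unmeasured pair by how ambiguous its objects' current
        # assignments are, and query the most ambiguous pair next.
        s = np.sort(soft_assign, axis=1)
        ambiguity = 1.0 - (s[:, -1] - s[:, -2])    # small margin => ambiguous
        n = soft_assign.shape[0]
        best, best_score = None, -np.inf
        for i in range(n):
            for j in range(i + 1, n):
                score = ambiguity[i] + ambiguity[j]
                if not observed[i, j] and score > best_score:
                    best, best_score = (i, j), score
        return best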