Context-Dependent Similarity
Attribute weighting and differential weighting, two major mechanisms for computing context-dependent similarity or dissimilarity measures, are studied and compared. A dissimilarity measure based on subset size in the context is proposed, and its metrization and application are given. It is also shown that while all attribute-weighting dissimilarity measures are metrics, differential-weighting dissimilarity measures are usually non-metric.
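The metric claim for attribute weighting can be illustrated with a minimal sketch (the data and weights below are invented for illustration, not taken from the paper): with fixed non-negative per-attribute weights, the dissimilarity is a weighted count of disagreeing attributes, which satisfies the triangle inequality.

```python
# Sketch of an attribute-weighted dissimilarity over nominal attributes.
# With fixed non-negative weights it is a weighted Hamming distance,
# hence a metric; the objects and weights here are purely illustrative.

def attribute_weighted_dissimilarity(x, y, weights):
    """Weighted count of attributes on which x and y disagree."""
    return sum(w for xi, yi, w in zip(x, y, weights) if xi != yi)

x = ("red", "round", "small")
y = ("red", "square", "small")
z = ("blue", "square", "small")
w = (5, 3, 2)  # fixed, context-derived attribute weights (hypothetical)

d_xy = attribute_weighted_dissimilarity(x, y, w)  # differ on shape: 3
d_yz = attribute_weighted_dissimilarity(y, z, w)  # differ on color: 5
d_xz = attribute_weighted_dissimilarity(x, z, w)  # differ on both: 8
assert d_xz <= d_xy + d_yz  # triangle inequality holds
```

Differential weighting, by contrast, lets the weights depend on the pair of objects being compared, which is exactly what breaks the triangle inequality in general.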
Efficient Parallel Estimation for Markov Random Fields
Swain, Michael J., Wixson, Lambert E., Chou, Paul B.
We present a new, deterministic, distributed MAP estimation algorithm for Markov Random Fields called Local Highest Confidence First (Local HCF). The algorithm has been applied to segmentation problems in computer vision and its performance compared with stochastic algorithms. The experiments show that Local HCF finds better estimates than stochastic algorithms with much less computation.
Reasoning About Beliefs and Actions Under Computational Resource Constraints
Although many investigators affirm a desire to build reasoning systems that behave consistently with the axiomatic basis defined by probability theory and utility theory, limited resources for engineering and computation can make a complete normative analysis impossible. We attempt to move discussion beyond the debate over the scope of problems that can be handled effectively to cases where it is clear that there are insufficient computational resources to perform an analysis deemed complete. Under these conditions, we stress the importance of considering the expected costs and benefits of applying alternative approximation procedures and heuristics for computation and knowledge acquisition. We discuss how knowledge about the structure of user utility can be used to control value tradeoffs for tailoring inference to alternative contexts. We address the notion of real-time rationality, focusing on the application of knowledge about the expected timewise-refinement abilities of reasoning strategies to balance the benefits of additional computation with the costs of acting with a partial result. We discuss the benefits of applying decision theory to control the solution of difficult problems given limitations and uncertainty in reasoning resources.
Imprecise Meanings as a Cause of Uncertainty in Medical Knowledge-Based Systems
There has been a considerable amount of work on uncertainty in knowledge-based systems. This work has generally been concerned with uncertainty arising from the strength of inferences and the weight of evidence. In this paper we discuss another type of uncertainty: that which is due to imprecision in the underlying primitives used to represent the knowledge of the system. In particular, a given word may denote many similar but not identical entities. Such words are said to be lexically imprecise. Lexical imprecision has caused widespread problems in many areas. Unless this phenomenon is recognized and appropriately handled, it can degrade the performance of knowledge-based systems. In particular, it can lead to difficulties with the user interface, and with the inferencing processes of these systems. Some techniques are suggested for coping with this phenomenon.
Towards a General-Purpose Belief Maintenance System
There currently exists a gap between the theories proposed by probability and uncertainty researchers and the needs of Artificial Intelligence research. These theories primarily address the needs of expert systems, using knowledge structures which must be pre-compiled and remain static in structure during runtime. Many AI systems require the ability to dynamically add and remove parts of the current knowledge structure (e.g., in order to examine what the world would be like for different causal theories). This requires more flexibility than existing uncertainty systems display. In addition, many AI researchers are only interested in using "probabilities" as a means of obtaining an ordering, rather than attempting to derive an accurate probabilistic account of a situation. This indicates the need for systems which stress ease of use and don't require extensive probability information when one cannot (or doesn't wish to) provide such information. This paper attempts to help reconcile the gap between approaches to uncertainty and the needs of many AI systems by examining the control issues which arise, independent of a particular uncertainty calculus, when one tries to satisfy these needs. Truth Maintenance Systems have been used extensively in problem solving tasks to help organize a set of facts and detect inconsistencies in the believed state of the world. These systems maintain a set of true/false propositions and their associated dependencies. However, situations often arise in which we are unsure of certain facts or in which the conclusions we can draw from available information are somewhat uncertain. The non-monotonic TMS [12] was an attempt at reasoning when all the facts are not known, but it fails to take into account degrees of belief and how available evidence can combine to strengthen a particular belief. This paper addresses the problem of probabilistic reasoning as it applies to Truth Maintenance Systems.
It describes a Belief Maintenance System that manages a current set of beliefs in much the same way that a TMS manages a set of true/false propositions. If the system knows that belief in fact1 is dependent in some way upon belief in fact2, then it automatically modifies its belief in fact1 when new information causes a change in belief of fact2. It models the behavior of a TMS, replacing its 3-valued logic (true, false, unknown) with an infinite-valued logic, in such a way as to reduce to a standard TMS if all statements are given in absolute true/false terms. Belief Maintenance Systems can, therefore, be thought of as a generalization of Truth Maintenance Systems, whose possible reasoning tasks are a superset of those for a TMS.
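The dependency-driven updating can be sketched in a few lines. This is a toy illustration, not the paper's actual calculus: beliefs are numbers in [0, 1], and min() stands in for whatever combination rule the system uses; with beliefs restricted to 0 and 1 it behaves like a plain TMS.

```python
# Toy belief-maintenance sketch (min() is a hypothetical stand-in for
# the system's real combination rule): a node's belief is derived from
# its antecedents, and re-derived whenever an antecedent changes.

class BeliefNode:
    def __init__(self, name, belief=None):
        self.name = name
        self.belief = belief   # None until asserted or derived
        self.antecedents = []  # nodes this node's belief depends on

def propagate(node):
    """Recompute a node's belief from its antecedents."""
    if node.antecedents:
        node.belief = min(a.belief for a in node.antecedents)
    return node.belief

fact2 = BeliefNode("fact2", belief=0.9)
fact1 = BeliefNode("fact1")
fact1.antecedents.append(fact2)
propagate(fact1)       # belief in fact1 follows belief in fact2

fact2.belief = 0.4     # new evidence weakens fact2...
propagate(fact1)       # ...and belief in fact1 is updated automatically
assert fact1.belief == 0.4

fact2.belief = 1.0     # with only 0/1 beliefs this reduces to a TMS
assert propagate(fact1) == 1.0
```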
Using T-Norm Based Uncertainty Calculi in a Naval Situation Assessment Application
RUM (Reasoning with Uncertainty Module) is an integrated software tool based on KEE, a frame system implemented in an object-oriented language. RUM's architecture is composed of three layers: representation, inference, and control. The representation layer is based on frame-like data structures that capture the uncertainty information used in the inference layer and the uncertainty meta-information used in the control layer. The inference layer provides a selection of five T-norm based uncertainty calculi with which to perform the intersection, detachment, union, and pooling of information. The control layer uses the meta-information to select the appropriate calculus for each context and to resolve eventual ignorance or conflict in the information. This layer also provides a context mechanism that allows the system to focus on the relevant portion of the knowledge base, and an uncertain-belief revision system that incrementally updates the certainty values of well-formed formulae (wffs) in an acyclic directed deduction graph. RUM has been tested and validated in a sequence of experiments in both naval and aerial situation assessment (SA), consisting of correlating reports and tracks, locating and classifying platforms, and identifying intents and threats. An example of naval situation assessment is illustrated. The testbed environment for developing these experiments has been provided by LOTTA, a symbolic simulator implemented in Flavors. This simulator maintains time-varying situations in a multi-player antagonistic game where players must make decisions in light of uncertain and incomplete data. RUM has been used to assist one of the LOTTA players to perform the SA task.
Decision Under Uncertainty in Diagnosis
This paper describes the incorporation of uncertainty into diagnostic reasoning based on the set covering model of Reggia et al., extended to what, in the Artificial Intelligence dichotomy between deep and compiled (shallow, surface) knowledge-based diagnosis, may be viewed as the generic form at the compiled end of the spectrum. A major undercurrent in this is advocating the need for a strong underlying model, and an integrated set of support tools for carrying such a model, in order to deal with uncertainty.
Predicting the Performance of Minimax and Product in Game-Tree Searching
The discovery that the minimax decision rule performs poorly in some games has sparked interest in possible alternatives to minimax. Until recently, the only games in which minimax was known to perform poorly were games which were mainly of theoretical interest. However, this paper reports results showing poor performance of minimax in a more common game called kalah. For the kalah games tested, a non-minimax decision rule called the product rule performs significantly better than minimax. This paper also discusses a possible way to predict whether or not minimax will perform well in a game when compared to product. A parameter called the rate of heuristic flaw (rhf) has been found to correlate positively with the performance of product against minimax. Both analytical and experimental results are given that appear to support the predictive power of rhf.
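The contrast between the two backup rules can be shown on a tiny tree. In the product rule, leaf evaluations are read as estimated win probabilities for the player to move at the root; the tree below is invented for illustration and is not the paper's kalah data.

```python
# Minimax vs. product-rule backup on the same toy tree.
# Leaves are heuristic values in [0, 1], interpreted by the product
# rule as independent win probabilities for the root player.

def minimax(node, maximizing=True):
    if not isinstance(node, list):  # leaf: heuristic value
        return node
    values = [minimax(c, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

def product_rule(node, maximizing=True):
    if not isinstance(node, list):
        return node
    values = [product_rule(c, not maximizing) for c in node]
    p = 1.0
    if maximizing:                  # we win if ANY child is a win
        for v in values:
            p *= 1.0 - v
        return 1.0 - p
    for v in values:                # opponent moves: we win only if
        p *= v                      # ALL children are wins for us
    return p

tree = [[0.9, 0.3], [0.6, 0.5]]     # two opponent nodes under the root
print(minimax(tree))                # 0.5 = max(min(0.9, 0.3), min(0.6, 0.5))
print(product_rule(tree))           # aggregates all leaves probabilistically
```

Note that minimax ignores the 0.9 leaf entirely, while the product rule lets every leaf influence the root value; the paper's rhf parameter is a way of predicting which behavior pays off in a given game.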
Managing Uncertainty in Rule Based Cognitive Models
An experiment replicated and extended recent findings on psychologically realistic ways of modeling propagation of uncertainty in rule based reasoning. Within a single production rule, the antecedent evidence can be summarized by taking the maximum of disjunctively connected antecedents and the minimum of conjunctively connected antecedents. The maximum certainty factor attached to each of the rule's conclusions can be scaled down by multiplication with this summarized antecedent certainty. Heckerman's modified certainty factor technique can be used to combine certainties for common conclusions across production rules.
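The propagation scheme described above can be sketched directly. For simplicity this sketch handles positive certainties only and uses the classical MYCIN-style parallel combination rather than Heckerman's modification; the rule and values are hypothetical.

```python
# Certainty-factor propagation as described in the abstract:
# max over disjuncts, min over conjuncts, scale by the rule's CF,
# then combine across rules (positive-CF case only; illustrative).

def antecedent_certainty(conjuncts):
    """Each conjunct is a tuple of disjuncts: max within, min across."""
    return min(max(disjuncts) for disjuncts in conjuncts)

def conclusion_certainty(rule_cf, antecedent_cf):
    """Scale the rule's maximum certainty by the evidence certainty."""
    return rule_cf * antecedent_cf

def combine(cf1, cf2):
    """Parallel combination of two positive CFs for one conclusion."""
    return cf1 + cf2 * (1.0 - cf1)

# Rule 1: IF (A or B) AND C THEN D, with maximum certainty 0.8
ante = antecedent_certainty([(0.6, 0.9), (0.7,)])  # max -> 0.9, min -> 0.7
cf_rule1 = conclusion_certainty(0.8, ante)         # 0.8 * 0.7 = 0.56
cf_rule2 = 0.5                                     # another rule concluding D
cf_d = combine(cf_rule1, cf_rule2)                 # 0.56 + 0.5 * 0.44 = 0.78
```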
Updating with Belief Functions, Ordinal Conditioning Functions and Possibility Measures
This paper discusses how a measure of uncertainty representing a state of knowledge can be updated when new information, which may itself be pervaded with uncertainty, becomes available. This problem is considered in various frameworks, namely: Shafer's evidence theory, Zadeh's possibility theory, and Spohn's theory of epistemic states. In the first two cases, analogues of Jeffrey's rule of conditioning are introduced and discussed. The relations between Spohn's model and possibility theory are emphasized, and Spohn's updating rule is contrasted with the Jeffrey-like rule of conditioning in possibility theory. Recent results by Shenoy on the combination of ordinal conditional functions are reinterpreted in the language of possibility theory. It is shown that Shenoy's combination rule has a well-known possibilistic counterpart.
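As background for the possibilistic side of this discussion, the basic min-based conditioning of a possibility distribution on a sure event can be sketched as follows; the distribution below is invented for illustration, and the Jeffrey-like rules in the paper generalize this to uncertain inputs.

```python
# Min-based possibilistic conditioning on a sure event A:
# worlds outside A get possibility 0; the most possible worlds in A
# are rescaled to 1; other worlds in A keep their possibility.

def condition(pi, event):
    """Condition possibility distribution pi on event (a set of worlds)."""
    plaus = max(pi[w] for w in event)  # Pi(A), possibility of the event
    return {w: (1.0 if w in event and pi[w] == plaus
                else pi[w] if w in event
                else 0.0)
            for w in pi}

pi = {"a": 1.0, "b": 0.7, "c": 0.4}   # illustrative distribution
post = condition(pi, {"b", "c"})      # learn that "a" is ruled out
assert post == {"a": 0.0, "b": 1.0, "c": 0.4}
```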