Estimation & Recognition under Perspective of Random-Fuzzy Dual Interpretation of Unknown Quantity: with Demonstration of IMM Filter

arXiv.org Artificial Intelligence

This paper considers the problems of estimation and recognition from the perspective of sigma-max inference (probability-possibility inference), with a focus on determining whether some of the unknown quantities involved could be more faithfully modeled as fuzzy uncertainty. Two related key issues are addressed: 1) the random-fuzzy dual interpretation of the unknown quantity being estimated; and 2) the principle for selecting the sigma or max operator in practical problems such as estimation and recognition. Our perspective, derived from the definitions of randomness and fuzziness, is that a continuous unknown quantity involved in estimation with an inaccurate prior is more appropriately modeled as randomness and handled by sigma inference, whereas a discrete unknown quantity involved in recognition with an insufficient (and inaccurate) prior is better modeled as fuzziness and handled by max inference. This philosophy is demonstrated with an updated version of the well-known interacting multiple model (IMM) filter, in which the jump Markovian system is reformulated as a hybrid uncertainty system: continuous state evolution is modeled, as usual, as a model-conditioned stochastic system; discrete mode transitions are modeled as a fuzzy system via a possibility (rather than probability) transition matrix; and hypothesis mixing uses the "max" operation instead of "sigma". In our example of maneuvering target tracking with simulated data from both a short-range fire-control radar and a long-range surveillance radar, the updated IMM filter shows significant improvement over the classic IMM filter, owing to its hard decisions on the system model and its faster response to discrete mode transitions.
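
To make the sigma/max contrast concrete, here is a minimal sketch of the mode-mixing step under both rules. The matrices, mode labels, and the max-product form of the possibilistic update are illustrative assumptions, not the authors' code; the paper's exact operator may differ.

```python
import numpy as np

def sigma_mixing(mu, P):
    """Classic IMM mixing: mu'_j = sum_i P[i, j] * mu_i (sum over the
    probability transition matrix), renormalized to a probability vector."""
    mu_pred = P.T @ mu
    return mu_pred / mu_pred.sum()

def max_mixing(pi, T):
    """Possibilistic mixing sketch: pi'_j = max_i T[i, j] * pi_i (max-product
    over a possibility transition matrix), rescaled so the largest degree is 1."""
    pi_pred = np.max(T * pi[:, None], axis=0)
    return pi_pred / pi_pred.max()

mu = np.array([0.7, 0.3])          # mode probabilities (e.g., CV vs. CT model)
P  = np.array([[0.95, 0.05],
               [0.05, 0.95]])      # probability transition matrix
pi = np.array([1.0, 0.4])          # mode possibilities
T  = np.array([[1.0, 0.3],
               [0.3, 1.0]])        # possibility transition matrix

print(sigma_mixing(mu, P))         # [0.68, 0.32]: soft blend of both modes
print(max_mixing(pi, T))           # [1.0, 0.4]: dominant mode keeps degree 1
```

Note how max mixing leaves the dominant mode at possibility 1 rather than diluting it across hypotheses, which is one way to picture the hard-decision behavior and faster mode-transition response described above.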


Precisiated Natural Language (PNL)

AI Magazine

This article is a sequel to an article titled "A New Direction in AI -- Toward a Computational Theory of Perceptions," which appeared in the Spring 2001 issue of AI Magazine (volume 22, No. 1, 73-84). The concept of precisiated natural language (PNL) was briefly introduced in that article, and PNL was employed as a basis for computation with perceptions. In what follows, the conceptual structure of PNL is described in greater detail, and PNL's role in knowledge representation, deduction, and concept definition is outlined and illustrated by examples. What should be understood is that PNL is in its initial stages of development and that the exposition that follows is an outline of the basic ideas that underlie PNL rather than a definitive theory.

A natural language is basically a system for describing perceptions. Perceptions, such as perceptions of distance, height, weight, color, temperature, similarity, likelihood, relevance, and most other attributes of physical and mental objects, are intrinsically imprecise, reflecting the bounded ability of sensory organs, and ultimately the brain, to resolve detail and store information. In this perspective, the imprecision of natural languages is a direct consequence of the imprecision of perceptions (Zadeh 1999, 2000).

How can a natural language be precisiated -- precisiated in the sense of making it possible to treat propositions drawn from a natural language as objects of computation? This is what PNL attempts to do. In PNL, precisiation is accomplished through translation into what is termed a precisiation language. In the case of PNL, the precisiation language is the generalized-constraint language (GCL), a language whose elements are so-called generalized constraints and their combinations. What distinguishes GCL from languages such as Prolog, LISP, SQL, and, more generally, languages associated with various logical systems, for example, predicate logic, modal logic, and so on, is its much higher expressive power.

The conceptual structure of PNL mirrors two fundamental facets of human cognition: (a) partiality and (b) granularity (Zadeh 1997). Partiality relates to the fact that most human concepts are not bivalent, that is, are a matter of degree. Thus, we have partial understanding, partial truth, partial possibility, partial certainty, partial similarity, and partial relevance, to cite a few examples. Similarly, granularity and granulation relate to clumping of values of attributes, forming granules with words as labels, for example, young, middle-aged, and old as labels of granules of age.

Existing approaches to natural language processing are based on bivalent logic -- a logic in which shading of truth is not allowed. PNL abandons bivalence. By so doing, PNL frees itself from limitations imposed by bivalence and categoricity, and opens the door to new approaches for dealing with long-standing problems in AI and related fields (Novak 1991). At this juncture, PNL is in its initial stages of development. As it matures, PNL is likely to find a variety of applications, especially in the realms of world knowledge representation, concept definition, deduction, decision, search, and question answering.
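
As a deliberately simplified illustration of granulation and graded constraints, the sketch below encodes granules of Age as fuzzy sets with words as labels. The trapezoidal membership functions and numeric ranges are assumptions chosen for illustration, not definitions from the article.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Granules of Age: each word labels a clump of values, i.e., a fuzzy set.
granules = {
    "young":       lambda x: trapezoid(x, 0, 0, 25, 40),
    "middle-aged": lambda x: trapezoid(x, 30, 40, 55, 65),
    "old":         lambda x: trapezoid(x, 55, 70, 120, 120),
}

# A proposition such as "Mary is young" induces a graded (not bivalent)
# constraint on Age(Mary); the degree to which age 32 satisfies it:
print(granules["young"](32))   # ~0.53: partially young, a matter of degree
```

The point of the toy is only that "young" constrains age to a degree rather than to a crisp interval, which is the bivalence-free behavior the article attributes to generalized constraints.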


Inference with Possibilistic Evidence

arXiv.org Artificial Intelligence

In this paper, the concept of possibilistic evidence, which is simultaneously a possibility distribution and a body of evidence, is proposed over an infinite universe of discourse. Inference with possibilistic evidence is investigated within a unified inference framework that maintains both the compatibility of concepts and the consistency of probability logic.
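
For intuition, the sketch below works on a finite universe (the paper itself treats an infinite universe of discourse) and shows the standard consonant-body-of-evidence reading of a possibility distribution, in the style of Dubois and Prade: the focal sets are nested level cuts, with masses given by successive drops in possibility. The distribution used is made up.

```python
def consonant_body_of_evidence(pi):
    """pi: dict mapping element -> possibility degree, with max degree 1.
    Returns a list of (focal_set, mass) pairs with nested focal sets."""
    levels = sorted(set(pi.values()), reverse=True)   # distinct degrees, descending
    levels.append(0.0)
    body = []
    for hi, lo in zip(levels, levels[1:]):
        focal = frozenset(x for x, p in pi.items() if p >= hi)  # level cut
        body.append((focal, hi - lo))                 # mass = drop in possibility
    return body

pi = {"a": 1.0, "b": 0.7, "c": 0.2}
for focal, mass in consonant_body_of_evidence(pi):
    print(sorted(focal), round(mass, 2))
# ['a'] 0.3, ['a', 'b'] 0.5, ['a', 'b', 'c'] 0.2 -- masses sum to 1, sets nested
```

This is one concrete sense in which a single object can be read both as a possibility distribution and as a body of evidence, which is the duality the abstract turns on.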


A New Direction in AI: Toward a Computational Theory of Perceptions

AI Magazine

Humans have a remarkable capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples are parking a car, driving in city traffic, playing golf, cooking a meal, and summarizing a story. In performing such tasks, humans use perceptions of time, direction, speed, shape, possibility, likelihood, truth, and other attributes of physical and mental objects. Reflecting the bounded ability of the human brain to resolve detail, perceptions are intrinsically imprecise. In more concrete terms, perceptions are f-granular, meaning that (1) the boundaries of perceived classes are unsharp and (2) the values of attributes are granulated, with a granule being a clump of values (points, objects) drawn together by indistinguishability, similarity, proximity, and function. For example, the granules of age might be labeled very young, young, middle-aged, old, very old, and so on. F-granularity of perceptions puts them well beyond the reach of traditional methods of analysis based on predicate logic or probability theory. The computational theory of perceptions (CTP), which is outlined in this article, adds to the armamentarium of AI a capability to compute and reason with perception-based information. The point of departure in CTP is the assumption that perceptions are described by propositions drawn from a natural language; for example, it is unlikely that there will be a significant increase in the price of oil in the near future. In CTP, a proposition, p, is viewed as an answer to a question, and the meaning of p is represented as a generalized constraint. To compute with perceptions, their descriptors are translated into what is called the generalized-constraint language (GCL). Then, goal-directed constraint propagation is utilized to answer a given query. A concept that plays a key role in CTP is that of precisiated natural language (PNL). The computational theory of perceptions suggests a new direction in AI -- a direction that might enhance the ability of AI to deal with real-world problems in which decision-relevant information is a mixture of measurements and perceptions. What is not widely recognized is that many important problems in AI fall into this category.


FF: The Fast-Forward Planning System

AI Magazine

Fast-Forward (FF) was the most successful automatic planner in the planning systems competition at the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS '00). Like the well-known HSP system, FF relies on forward search in the state space, guided by a heuristic that estimates goal distances by ignoring delete lists. It differs from HSP in a number of important details. This article describes the algorithmic techniques used in FF in comparison to HSP and evaluates their benefits in terms of run-time and solution-length behavior.
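
As a concrete picture of the delete-list relaxation, here is a minimal sketch that counts fact layers of the relaxed task until the goals appear. FF itself extracts a relaxed plan and uses its length as the heuristic, so this layer count is only a simplified stand-in, and the toy task names are hypothetical.

```python
def relaxed_layers(facts, goals, actions):
    """Count fact layers of the delete-relaxed task until all goals hold.
    actions: iterable of (preconditions, add_effects) as frozensets; delete
    lists are ignored, so facts, once achieved, stay true.
    Returns the layer index, or None if the goals are unreachable."""
    facts = set(facts)
    layer = 0
    while not goals <= facts:
        new = {e for pre, adds in actions if pre <= facts for e in adds}
        if new <= facts:
            return None          # fixpoint without the goals: unreachable
        facts |= new
        layer += 1
    return layer

# Toy task (hypothetical names): move A -> B, then B -> C.
actions = [
    (frozenset({"at-A"}), frozenset({"at-B"})),
    (frozenset({"at-B"}), frozenset({"at-C"})),
]
print(relaxed_layers({"at-A"}, frozenset({"at-C"}), actions))  # 2
```

Because deletes are ignored, the relaxed task is solvable whenever the original is, and such relaxed distances give the informative, efficiently computable goal-distance estimates that both HSP and FF exploit.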