Reasoning with limited computational resources (such as time or memory) is an important problem, particularly in cognitive embedded systems. Classical logic is usually considered inappropriate for this purpose, as no guarantees regarding deadlines can be made. One of the more interesting approaches to address this problem is built around the concept of active logics. Although a step in the right direction, active logics still do not offer a complete solution. Our work is based on the assumption that Labeled Deductive Systems offer an appropriate metamathematical methodology for studying the problem. As a first step, we have shown that the LDS-based approach is strictly more expressive than active logics. We have also implemented a prototype automatic theorem prover for LDS-based systems.
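In a Labeled Deductive System, inference rules act on formulas together with labels, and a label algebra determines how labels combine. The following is a minimal sketch of that idea, assuming a toy propositional language; the names (`LabeledFormula`, `modus_ponens`) and the choice of set-union as the label algebra are illustrative, not taken from the prototype prover mentioned above.

```python
# Minimal LDS sketch: a labeled modus ponens whose label algebra is set
# union, so each conclusion records the assumptions it depends on.
from dataclasses import dataclass


@dataclass(frozen=True)
class LabeledFormula:
    formula: str        # e.g. "A" or "A->B"
    label: frozenset    # here: the set of assumption names used so far


def modus_ponens(premise: LabeledFormula, implication: LabeledFormula):
    """From A:alpha and (A->B):beta derive B:(alpha union beta).

    In LDS an inference rule manipulates labels as well as formulas;
    here the label of the conclusion is the union of the premise labels.
    """
    antecedent, arrow, consequent = implication.formula.partition("->")
    if arrow and antecedent == premise.formula:
        return LabeledFormula(consequent, premise.label | implication.label)
    return None


a = LabeledFormula("A", frozenset({"a1"}))
a_implies_b = LabeledFormula("A->B", frozenset({"a2"}))
b = modus_ponens(a, a_implies_b)   # B, labeled with {"a1", "a2"}
```

Other choices of label algebra (e.g. timestamps with a "max" operation) yield other logics from the same rule skeleton, which is one way to see why the framework is strictly more expressive than a fixed active logic.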
A formalism that has been developed with this purpose in mind is that of active logic. Active logic combines inference rules with a constantly evolving measure of time (a 'now') that can itself be referenced in those rules. These features of active logic provide mechanisms for dealing with various forms of uncertainty arising in computation. A computational process P can be said to be uncertain about a proposition (or datum) if (i) it explici

Uncertainties of type (i) above lend themselves to representation by probabilistic reasoning, which involves the representation of explicit confidence levels for beliefs (for example, Bayesian networks); somewhat less so for type (ii); and even less for types (iii) and (iv). On the other hand, a suitably configured default reasoner (a nonmonotonic approach) can represent all of these, and without special ad hoc tools; that is, active logic already has, in its time-sensitive inference architecture, the means for performing default reasoning in an appropriately expressive manner. It is the purpose of this paper to elaborate on that claim; the format consists of an initial primer on uncertainty in active logic, then its current implementation (Alma/Carne), existing applications, and finally a discussion of potential future applications.

However, in a Bayesian net, for instance, because the probabilities have a somewhat holistic character, with the probability of a given proposition depending not just on direct but also on indirect connections, adding new propositions or rules (connections between nodes) is likely to be expensive and may require recalculation of all connection weights. If one's world-model is well specified enough that reasoning about and interacting with the world is primarily a matter of coming to trust or distrust propositions already present in that model, a Bayesian net may provide a good engine for reasoning.
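The one-step-per-tick character of active logic can be sketched as a loop in which conclusions drawn at step `now` only become available at step `now + 1`, and the belief set always contains the current time. This is a toy illustration of that idea only; the rule format and the names (`step`, `mp`) are assumptions for this sketch, not the Alma/Carne API.

```python
# Toy active-logic loop: reasoning itself takes time, and the evolving
# 'now' is an ordinary belief that rules could reference.
def step(now, beliefs, rules):
    """One active-logic step: apply every rule to the current beliefs,
    then advance the clock. The old ("now", t) belief is retracted and
    replaced by ("now", t + 1)."""
    new = set()
    for rule in rules:
        new |= rule(now, beliefs)
    return now + 1, (beliefs - {("now", now)}) | new | {("now", now + 1)}


def mp(now, beliefs):
    """Modus ponens over beliefs of the form p and ("if", p, q)."""
    return {r[2] for p in beliefs
                 for r in beliefs
                 if isinstance(r, tuple) and len(r) == 3
                 and r[0] == "if" and r[1] == p}


now, beliefs = 0, {"wet", ("if", "wet", "slippery"), ("now", 0)}
now, beliefs = step(now, beliefs, [mp])
# after one tick, "slippery" has been derived and the clock reads 1
```

Because the clock is just another belief, rules of the form "if X was concluded at step t and nothing contradicted it by step t + k, then assume X" can be written in the same vocabulary, which is the sense in which default reasoning comes for free in this architecture.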
However, if one's world-model is itself expected to be subject to frequent change, as novel propositions and rules are added to (or removed from) one's KB, we think that a reasoning engine based on active logic will prove a better candidate. In addition, partly because a Bayesian net deals so smoothly with inconsistent incoming data, it tends to operate on the assumption that incoming data is accurate and can be taken at face value. We have two related concerns about this: first, an abnormally long string of inaccurate data - as might be expected from a faulty sensor or a deliberate attempt at deceit - would obviously reduce the probability of certain beliefs that, were the data known to be inaccurate, would have retained their original strengths. It has been suggested to us that one could model inaccurate incoming information by coding child nodes that would contain information regarding the expected accuracy of the incoming information from a given evidence node.
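The "accuracy node" suggestion can be illustrated with a hand-computed posterior rather than a full Bayesian-net library: the impact of an evidence node is mediated by an explicit sensor-reliability parameter. The function name and the numbers below are made up for the illustration.

```python
# Noisy-channel view of a sensor: it reports H with probability
# `reliability` when H holds, and falsely reports H with probability
# 1 - reliability when H does not hold.
def posterior(prior, reliability):
    """P(H | sensor reports H), by Bayes' rule."""
    p_report = reliability * prior + (1 - reliability) * (1 - prior)
    return reliability * prior / p_report


print(posterior(0.3, 0.9))   # trusted sensor: belief rises well above the prior
print(posterior(0.3, 0.5))   # uninformative sensor: belief stays at the prior
print(posterior(0.3, 0.1))   # distrusted sensor: the report lowers belief
```

This also makes the concern above concrete: unless the reliability parameter is itself revised, a long run of reports from a faulty sensor keeps shifting the posterior exactly as if the reports were trustworthy.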
Humor employs an essential false logic which masks the incongruity of two central meanings that are brought into overlap. Formalizing this false logic - if it exists, exists intersubjectively, and is indeed essential for humor - to a degree sufficient for the computational detection and generation of humor has been a vexing problem for computational humor research. This paper outlines several such logics, in addition to the default of reasoning in a way that is one degree more implausible than the most common-sense logic that can connect the two meanings. The results are informed in part by a pilot study asking participants to explain different types of jokes.
Jean-Yves Béziau, in "Classical Negation can be expressed by One of its Halves" (Béziau 1999), has given an example of a phenomenon that people consider a translation paradox. We elaborate on Béziau's case, which concerns the translation of classical negation into one of its halves, and provide some background relevant to this discussion. The translation in question turns out not to deliver new results, but instead serves to illustrate the development of logic translation, which is widely discussed in various modern applications to computer science.