A Logic of Uncertain Interpretation
We do not always know how to interpret the statements that we hear, the observations that we make, or the evidence that we gather. Traditional frameworks for reasoning about uncertainty and belief revision typically suppose that new information is presented definitively: there is no question about what was learned. The paradigm of Bayesian conditioning exemplifies this assumption: "evidence" takes the simple form of an event E, and belief revision proceeds by updating probabilities accordingly: π ↦ π(· | E). To capture the kind of uncertainty about interpretation we wish to reason about, we change the fundamental representation of events so that the sets they correspond to are themselves variable--the "true meaning" of a statement thus itself becomes an object of uncertainty. This approach follows in the spirit of other recent work [1, 2], expanding on it along two key dimensions.
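To make the contrast concrete, here is a minimal Python sketch (not the paper's formalism; the worlds, candidate events, and credences below are invented for illustration) comparing standard conditioning on a definite event E with an update in which the statement's "true meaning" could be one of several events, each held with some credence, and the posterior mixes the corresponding conditioned distributions.

```python
from fractions import Fraction

# Toy probability space: four worlds with a uniform prior (assumed example).
prior = {"w1": Fraction(1, 4), "w2": Fraction(1, 4),
         "w3": Fraction(1, 4), "w4": Fraction(1, 4)}

def condition(pi, event):
    """Standard Bayesian conditioning: pi -> pi(. | E) on a definite event E (a set of worlds)."""
    mass = sum(p for w, p in pi.items() if w in event)
    return {w: (p / mass if w in event else Fraction(0)) for w, p in pi.items()}

def uncertain_update(pi, interpretations):
    """Hypothetical update when the event itself is uncertain:
    `interpretations` maps candidate events (frozensets of worlds) to the
    credence that each is the statement's true meaning; the posterior is
    the credence-weighted mixture of the conditioned distributions."""
    posterior = {w: Fraction(0) for w in pi}
    for event, weight in interpretations.items():
        conditioned = condition(pi, event)
        for w in pi:
            posterior[w] += weight * conditioned[w]
    return posterior

# Definite evidence: we learned exactly {w1, w2}.
print(condition(prior, {"w1", "w2"}))

# Uncertain interpretation: the statement meant {w1, w2} with credence 2/3,
# or {w1, w3} with credence 1/3.
print(uncertain_update(prior, {frozenset({"w1", "w2"}): Fraction(2, 3),
                               frozenset({"w1", "w3"}): Fraction(1, 3)}))
```

Because each conditioned distribution sums to 1 and the interpretation credences sum to 1, the mixed posterior is again a probability distribution; when only one interpretation has positive credence, the update reduces to ordinary conditioning.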
arXiv.org Artificial Intelligence
Mar-15-2025