
Resolving Zadeh's Paradox: Axiomatic Possibility Theory as a Foundation for Reliable Artificial Intelligence

Bychkov, Oleksii, Bychkova, Sophia, Lytvynchuk, Khrystyna

arXiv.org Artificial Intelligence

This work advances and substantiates the thesis that the resolution of the crisis posed by Zadeh's paradox in Dempster-Shafer theory (DST) lies in the domain of possibility theory, specifically in the axiomatic approach developed in Bychkov's article. Unlike numerous attempts to fix Dempster's rule, this approach builds from scratch a logically consistent and mathematically rigorous foundation for working with uncertainty, using the dualistic apparatus of possibility and necessity measures. The aim of this work is to demonstrate that possibility theory is not merely an alternative, but provides a fundamental resolution of DST's paradoxes. A comparative analysis of three paradigms will be conducted: probabilistic, evidential, and possibilistic. Using a classic medical diagnostic dilemma as an example, it will be shown how possibility theory allows for the correct processing of contradictory data, avoiding the logical traps of DST and bringing formal reasoning closer to the logic of natural intelligence.
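The medical dilemma referred to above is Zadeh's classic counterexample. A minimal sketch (the hypothesis labels and numbers are the standard textbook illustration, not taken from the paper) shows how Dempster's rule assigns full belief to the diagnosis both experts considered nearly impossible, while a min-based possibilistic fusion exposes the conflict instead of hiding it:

```python
# Zadeh's counterexample: two experts, hypotheses M (meningitis),
# C (concussion), T (tumor). Each expert rules out the other's diagnosis.

def dempster(m1, m2):
    """Combine two mass functions on singleton hypotheses by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + p * q
            else:
                conflict += p * q  # mass assigned to incompatible pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}, conflict

m1 = {"M": 0.99, "T": 0.01}   # expert 1
m2 = {"C": 0.99, "T": 0.01}   # expert 2
fused, k = dempster(m1, m2)
print({h: round(v, 6) for h, v in fused.items()})  # {'T': 1.0} -- the paradox
print(round(k, 4))                                 # 0.9999: almost all mass conflicts

# A possibilistic (min) combination keeps the conflict visible: the fused
# distribution has maximal possibility 0.01, flagging deep inconsistency.
pi1 = {"M": 1.0, "T": 0.01, "C": 0.0}
pi2 = {"C": 1.0, "T": 0.01, "M": 0.0}
pi = {h: min(pi1[h], pi2[h]) for h in pi1}  # unnormalised conjunctive fusion
print(pi)
```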


FNBT: Full Negation Belief Transformation for Open-World Information Fusion Based on Dempster-Shafer Theory of Evidence

He, Meishen, Ma, Wenjun, Wang, Jiao, Yue, Huijun, Fan, Xiaoma

arXiv.org Artificial Intelligence

The Dempster-Shafer theory of evidence has been widely applied in the field of information fusion under uncertainty. Most existing research focuses on combining evidence within the same frame of discernment. However, in real-world scenarios, trained algorithms or data often originate from different regions or organizations, where data silos are prevalent. As a result, using different data sources or models to generate basic probability assignments may lead to heterogeneous frames, for which traditional fusion methods often yield unsatisfactory results. To address this challenge, this study proposes an open-world information fusion method, termed Full Negation Belief Transformation (FNBT), based on the Dempster-Shafer theory. More specifically, a criterion is introduced to determine whether a given fusion task belongs to the open-world setting. Then, by extending the frames, the method can accommodate elements from heterogeneous frames. Finally, a full negation mechanism is employed to transform the mass functions, so that existing combination rules can be applied to the transformed mass functions for such information fusion. Theoretically, the proposed method satisfies three desirable properties, which are formally proven: mass function invariance, heritability, and essential conflict elimination. Empirically, FNBT demonstrates superior performance in pattern classification tasks on real-world datasets and successfully resolves Zadeh's counterexample, thereby validating its practical effectiveness.


Reasoning with random sets: An agenda for the future

Cuzzolin, Fabio

arXiv.org Artificial Intelligence

The theory of belief functions [162, 67] is a modelling language for representing and combining elementary items of evidence, which do not necessarily come in the form of sharp statements, with the goal of maintaining a mathematical representation of an agent's beliefs about those aspects of the world which the agent is unable to predict with reasonable certainty. While arguably a more appropriate mathematical description of uncertainty than classical probability theory, for the reasons we have thoroughly explored in [50], the theory of evidence is relatively simple to understand and implement, and does not require one to abandon the notion of an event, as is the case, for instance, for Walley's imprecise probability theory [193]. It is grounded in the beautiful mathematics of random sets, and exhibits strong relationships with many other theories of uncertainty. As mathematical objects, belief functions have fascinating properties in terms of their geometry, algebra [207] and combinatorics. Despite initial concerns about the computational complexity of a naive implementation of the theory of evidence, evidential reasoning can actually be implemented on large sample spaces [156] and in situations involving the combination of numerous pieces of evidence [74]. Elementary items of evidence often induce simple belief functions, which can be combined very efficiently with complexity O(n + 1).
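As a reminder of the basic machinery the survey builds on, here is a minimal sketch of a mass function over subsets of a frame and the belief and plausibility measures it induces; the frame and mass values are illustrative, not taken from the text:

```python
# A mass function assigns weight to subsets (focal sets) of the frame.
frame = frozenset({"a", "b", "c"})
m = {
    frozenset({"a"}): 0.5,        # sharp evidence for a
    frozenset({"a", "b"}): 0.3,   # imprecise evidence: "a or b"
    frame: 0.2,                   # vacuous mass (pure ignorance)
}

def bel(event):
    """Belief: total mass committed to subsets of the event."""
    return sum(v for focal, v in m.items() if focal <= event)

def pl(event):
    """Plausibility: total mass of focal sets consistent with the event."""
    return sum(v for focal, v in m.items() if focal & event)

A = frozenset({"a", "b"})
print(bel(A), pl(A))  # 0.8 1.0 -- belief never exceeds plausibility
```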


A Dempster-Shafer approach to trustworthy AI with application to fetal brain MRI segmentation

Fidon, Lucas, Aertsen, Michael, Kofler, Florian, Bink, Andrea, David, Anna L., Deprest, Thomas, Emam, Doaa, Guffens, Frédéric, Jakab, András, Kasprian, Gregor, Kienast, Patric, Melbourne, Andrew, Menze, Bjoern, Mufti, Nada, Pogledic, Ivana, Prayer, Daniela, Stuempflen, Marlene, Van Elslander, Esther, Ourselin, Sébastien, Deprest, Jan, Vercauteren, Tom

arXiv.org Artificial Intelligence

Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and images acquired at different centers than training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards the voxel-level labels predicted by the backbone AI that violate expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI, consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of a state-of-the-art backbone AI for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
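The discard-and-fallback mechanism described above can be sketched schematically; the function name, the toy 1-D "volume", and the expert rule below are illustrative assumptions, not the paper's actual implementation:

```python
def failsafe(backbone_labels, fallback_labels, violates):
    """Keep each backbone label unless it violates expert knowledge,
    in which case use the fallback prediction for that voxel."""
    return [fb if violates(b) else b
            for b, fb in zip(backbone_labels, fallback_labels)]

backbone = [0, 1, 2, 1]          # backbone segmentation (toy labels)
fallback = [0, 1, 1, 1]          # fallback method's segmentation
rule = lambda label: label == 2  # assumed expert rule: label 2 impossible here

print(failsafe(backbone, fallback, rule))  # [0, 1, 1, 1]
```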


Combining Predictions under Uncertainty: The Case of Random Decision Trees

Busch, Florian, Kulessa, Moritz, Mencía, Eneldo Loza, Blockeel, Hendrik

arXiv.org Artificial Intelligence

A common approach to aggregate classification estimates in an ensemble of decision trees is to either use voting or to average the probabilities for each class. The latter takes uncertainty into account, but not the reliability of the uncertainty estimates (so to say, the "uncertainty about the uncertainty"). More generally, much remains unknown about how to best combine probabilistic estimates from multiple sources. In this paper, we investigate a number of alternative prediction methods. Our methods are inspired by the theories of probability, belief functions and reliable classification, as well as a principle that we call evidence accumulation. Our experiments on a variety of data sets are based on random decision trees, which guarantees a high diversity in the predictions to be combined. Somewhat unexpectedly, we found that taking the average over the probabilities is actually hard to beat. However, evidence accumulation showed consistently better results on all but very small leaves.
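The contrast between averaging and evidence accumulation can be illustrated with assumed data; the counts below and the reading of "evidence accumulation" as pooling raw leaf counts before normalising are illustrative assumptions, not the paper's exact definition:

```python
# Two trees route the same instance to leaves of very different size.
leaf_counts = [
    {"pos": 9, "neg": 1},  # large, reliable leaf
    {"pos": 0, "neg": 1},  # tiny leaf: an unreliable probability estimate
]

def normalise(counts):
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

# 1) Average the per-leaf probabilities: each leaf counts equally.
probs = [normalise(c) for c in leaf_counts]
avg = {c: sum(p[c] for p in probs) / len(probs) for c in probs[0]}

# 2) Accumulate evidence: pool the raw counts, normalise once at the end,
#    so leaves backed by more training examples carry more weight.
pooled = {c: sum(l[c] for l in leaf_counts) for c in leaf_counts[0]}
acc = normalise(pooled)

print(avg)  # {'pos': 0.45, 'neg': 0.55} -> the tiny leaf flips the decision
print(acc)  # pos ~0.818, neg ~0.182 -> pooling discounts the tiny leaf
```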


An Evidential Neural Network Model for Regression Based on Random Fuzzy Numbers

Denoeux, Thierry

arXiv.org Artificial Intelligence

We introduce a distance-based neural network model for regression, in which prediction uncertainty is quantified by a belief function on the real line. The model interprets the distances of the input vector to prototypes as pieces of evidence represented by Gaussian random fuzzy numbers (GRFN's) and combined by the generalized product intersection rule, an operator that extends Dempster's rule to random fuzzy sets. The network output is a GRFN that can be summarized by three numbers characterizing the most plausible predicted value, variability around this value, and epistemic uncertainty. Experiments with real datasets demonstrate the very good performance of the method as compared to state-of-the-art evidential and statistical learning algorithms.


Fusion of evidential CNN classifiers for image classification

Tong, Zheng, Xu, Philippe, Denoeux, Thierry

arXiv.org Artificial Intelligence

We propose an information-fusion approach based on belief functions to combine convolutional neural networks. In this approach, several pre-trained DS-based CNN architectures extract features from input images and convert them into mass functions on different frames of discernment. A fusion module then aggregates these mass functions using Dempster's rule. An end-to-end learning procedure allows us to fine-tune the overall architecture using a learning set with soft labels, which further improves the classification performance. The effectiveness of this approach is demonstrated experimentally using three benchmark databases.


A geometric approach to conditioning belief functions

Cuzzolin, Fabio

arXiv.org Artificial Intelligence

Conditioning is crucial in applied science whenever inference involves time series. Belief calculus is an effective way of handling such inference in the presence of epistemic uncertainty -- unfortunately, different approaches to conditioning in the belief function framework have been proposed in the past, leaving the matter somewhat unsettled. Inspired by the geometric approach to uncertainty, in this paper we propose an approach to the conditioning of belief functions based on geometrically projecting them onto the simplex associated with the conditioning event in the space of all belief functions. We show here that such a geometric approach to conditioning often produces simple results with straightforward interpretations in terms of degrees of belief. This raises the question of whether classical approaches, such as for instance Dempster's conditioning, can also be reduced to some form of distance minimisation in a suitable space. The study of families of combination rules generated by (geometric) conditioning rules appears to be the natural continuation of the presented research.


Uncertainty measures: The big picture

Cuzzolin, Fabio

arXiv.org Artificial Intelligence

Probability theory is far from being the most general mathematical theory of uncertainty. A number of arguments point at its inability to describe second-order ('Knightian') uncertainty. In response, a wide array of theories of uncertainty have been proposed, many of them generalisations of classical probability. As we show here, such frameworks can be organised into clusters sharing a common rationale, exhibit complex links, and are characterised by different levels of generality. Our goal is a critical appraisal of the current landscape in uncertainty theory.


Combination of interval-valued belief structures based on belief entropy

Qin, Miao, Tang, Yongchuan

arXiv.org Artificial Intelligence

Its application involves a wide range of areas including expert systems [3][4][5], information fusion [6], pattern classification [7][8][9], risk evaluation [10][11][12], image recognition [13], classification [14][15] and data mining [16], etc. The original DS theory requires deterministic belief degrees and belief structures. However, in practical situations, evidence coming from multiple sources may be influenced by unexpected extraneous factors. The lack of information, linguistic ambiguity or vagueness, and cognitive bias all contribute to the uncertain evidence obtained in practical situations. For example, during risk assessment, an expert may be unable to provide a precise assessment if he/she is not 100% sure.
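One widely used concrete instance of belief entropy is Deng entropy, which reduces to Shannon entropy when all focal sets are singletons; whether this is exactly the measure used in the paper is an assumption here, and the mass values below are illustrative:

```python
from math import log2

def deng_entropy(m):
    """Deng entropy: E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) )."""
    return -sum(v * log2(v / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)

# Sharp evidence: two singleton focal sets -> ordinary Shannon entropy.
m_sharp = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
# Vague evidence: all mass on a two-element set -> extra uncertainty
# from imprecision, reflected by the (2^|A| - 1) term.
m_vague = {frozenset({"a", "b"}): 1.0}

print(deng_entropy(m_sharp))  # 1.0 (fair-coin Shannon entropy)
print(deng_entropy(m_vague))  # log2(3) ~ 1.585
```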