Basic concepts, definitions, and methods in D number theory

arXiv.org Artificial Intelligence

Although DST has many advantages in representing and dealing with uncertainty, it is limited by hypotheses and constraints that are hard to satisfy in some situations [3-6]. There are two main aspects. First, in DST a frame of discernment (FOD) must be composed of mutually exclusive elements, which is called the FOD's exclusiveness hypothesis. Second, in DST the sum of basic probabilities or belief m(.) in a basic probability assignment (BPA) must be 1 (equivalently, basic probabilities cannot be assigned to elements outside the FOD), which is called the BPA's completeness constraint. To overcome these limitations, a new generalization of DST, called D number theory (DNT), has recently been proposed [7, 8] for the fusion of uncertain information with non-exclusiveness and incompleteness. DNT stems from the concept of D numbers [9-16] and aims to build a more sophisticated framework, similar to DST, for representing and reasoning with uncertain information from a generic set-membership perspective, relaxing both the exclusiveness constraint on elements of the FOD and the completeness assumption on the BPA in DST.
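
As a minimal illustration of the two constraints discussed above, the sketch below contrasts a classical BPA, whose masses must sum to exactly 1, with a D number whose masses may sum to less than 1. The FOD and mass values are invented for illustration and are not taken from the cited papers.

```python
# Minimal sketch (hypothetical example, not from the cited papers):
# a classical BPA must satisfy sum(m) == 1, while a D number may
# leave part of the mass unassigned (incompleteness).

def is_valid_bpa(m, tol=1e-9):
    """Check the BPA completeness constraint: masses sum to 1."""
    return all(v >= 0 for v in m.values()) and abs(sum(m.values()) - 1.0) < tol

def is_valid_d_number(d, tol=1e-9):
    """D numbers only require non-negative masses summing to at most 1."""
    return all(v >= 0 for v in d.values()) and sum(d.values()) <= 1.0 + tol

# Classical BPA over an FOD of exclusive hypotheses {a, b}.
m = {frozenset({"a"}): 0.6, frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.1}
print(is_valid_bpa(m))        # True: masses sum to exactly 1

# D number: 0.1 of the mass is left unassigned (incomplete information).
d = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.4}
print(is_valid_bpa(d))        # False under DST's completeness constraint
print(is_valid_d_number(d))   # True under DNT
```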


Evidence Propagation and Consensus Formation in Noisy Environments

arXiv.org Artificial Intelligence

We study the effectiveness of consensus formation in multi-agent systems where there is both belief updating based on direct evidence and belief combination between agents. In particular, we consider the scenario in which a population of agents collaborate on the best-of-n problem, where the aim is to reach a consensus about which is the best (alternatively, true) state from amongst a set of states, each with a different quality value (or level of evidence). Agents' beliefs are represented within Dempster-Shafer theory by mass functions, and we investigate the macro-level properties of four well-known belief combination operators for this multi-agent consensus formation problem: Dempster's rule, Yager's rule, Dubois & Prade's operator and the averaging operator. The convergence properties of the operators are considered and simulation experiments are conducted for different evidence rates and noise levels. Results show that a combination of updating from direct evidence and belief combination between agents results in better consensus on the best state than evidence updating alone does. We also find that in this framework the operators are robust to noise. Broadly, Dubois & Prade's operator results in better convergence to the best state. Finally, we consider how well the Dempster-Shafer approach to the best-of-n problem scales to large numbers of states.
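
As a concrete illustration of one of the operators studied, the sketch below applies a generic textbook form of Dempster's rule of combination to two mass functions over a small set of states. The state values and masses are hypothetical; this is not the paper's code.

```python
# Minimal sketch of Dempster's rule of combination for two mass
# functions over subsets of a small frame of discernment.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to mass)."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two agents' beliefs about which of three states {s1, s2, s3} is best.
m1 = {frozenset({"s1"}): 0.7, frozenset({"s1", "s2", "s3"}): 0.3}
m2 = {frozenset({"s2"}): 0.4, frozenset({"s1", "s2"}): 0.6}
print(dempster_combine(m1, m2))
```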


Deep Active Localization

arXiv.org Artificial Intelligence

Active localization is the problem of generating robot actions that allow it to maximally disambiguate its pose within a reference map. Traditional approaches use an information-theoretic criterion for action selection and hand-crafted perceptual models. In this work we propose an end-to-end differentiable method for learning to take informative actions that is trainable entirely in simulation and then transferable to real robot hardware with zero refinement. The system is composed of two modules: a convolutional neural network for perception, and a planning module trained with deep reinforcement learning. We introduce a multi-scale approach to the learned perceptual model, since the accuracy needed to perform action selection with reinforcement learning is much less than the accuracy needed for robot control. We demonstrate that the resulting system outperforms systems that use the traditional approach for either perception or planning. We also demonstrate our approach's robustness to different map configurations and other nuisance parameters through the use of domain randomization in training. The code is also compatible with the OpenAI Gym framework, as well as the Gazebo simulator.
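
The abstract does not give implementation details, so the following is a purely illustrative sketch of the two-module structure described (a convolutional perception network feeding a policy network for action selection). All layer sizes, module names and the observation format are assumptions, not the paper's architecture.

```python
# Illustrative sketch only: the paper's actual architecture and
# observation format are not specified in the abstract.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """CNN mapping a sensor/map observation to a coarse pose belief."""
    def __init__(self, n_pose_bins):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, n_pose_bins)

    def forward(self, obs):
        return torch.softmax(self.head(self.conv(obs)), dim=-1)

class PlannerNet(nn.Module):
    """Policy network choosing an action from the coarse pose belief."""
    def __init__(self, n_pose_bins, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pose_bins, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, belief):
        return self.net(belief)   # action logits for an RL policy

perception, planner = PerceptionNet(64), PlannerNet(64, 4)
obs = torch.randn(1, 1, 32, 32)              # dummy occupancy-grid observation
action_logits = planner(perception(obs))
```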


Factorization of Dempster-Shafer Belief Functions Based on Data

arXiv.org Artificial Intelligence

One important obstacle in applying Dempster-Shafer Theory (DST) is its relationship to frequencies. In particular, there exist serious difficulties in finding factorizations of belief functions from data. In probability theory, factorizations are usually related to the notion of (conditional) independence, and their feasibility is tested accordingly. However, in DST conditional belief distributions prove to be non-proper belief functions (that is, ones associated with negative "frequencies"). This makes statistical testing of potential conditional independencies practically impossible, as no coherent interpretation has so far been found for negative belief function values. In this paper a novel attempt is made to overcome this difficulty. In the proposal no conditional beliefs are calculated; instead a new measure F is introduced within the framework of DST, closely related to conditional independence, allowing conventional statistical tests to be applied for the detection of dependence/independence.
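
The measure F itself is not defined in the abstract, but the conventional statistical machinery it is meant to enable is standard. Below is a minimal sketch of such a test, a chi-square test of independence on a contingency table of observed frequencies using scipy; the counts are invented for illustration, and the paper's measure F is not reproduced.

```python
# Sketch of the kind of conventional statistical test referred to above:
# a chi-square test of independence on observed frequency data.
import numpy as np
from scipy.stats import chi2_contingency

# Observed co-occurrence counts of two binary attributes X and Y (invented).
table = np.array([[30, 10],
                  [12, 28]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("reject independence of X and Y at the 5% level")
else:
    print("no evidence against independence of X and Y")
```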


Dempsterian-Shaferian Belief Network From Data

arXiv.org Artificial Intelligence

Shenoy and Shafer {Shenoy:90} demonstrated that, for both Dempster-Shafer Theory and probability theory, marginals of joint belief distributions can be calculated efficiently (by so-called local computations) provided that the joint distribution can be decomposed (factorized) into a belief network. A number of algorithms exist for decomposing a probabilistic joint belief distribution into a Bayesian (belief) network from data. For example, Spirtes, Glymour and Scheines {Spirtes:90b} formulated a conjecture that a direct dependence test and a head-to-head meeting test would suffice to construct a Bayesian network from data in such a way that Pearl's concept of d-separation {Geiger:90} applies. This paper is intended to transfer the approach of Spirtes, Glymour and Scheines {Spirtes:90b} onto the ground of Dempster-Shafer Theory (DST). For this purpose, the frequentist interpretation of DST developed in {Klopotek:93b} is exploited. A special notion of conditionality for DST is introduced and demonstrated to behave with respect to Pearl's d-separation {Geiger:90} in much the same way as conditional probability (though some differences, such as non-uniqueness, are evident). Based on this, an algorithm analogous to that from {Spirtes:90b} is developed. The notion of a partially oriented graph (pog) is introduced, and within this graph the notion of p-d-separation is defined. If the direct dependence test and the head-to-head meeting test are used to orient the pog, then p-d-separation in the pog is shown to be equivalent to Pearl's d-separation for any compatible DAG.
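
As a small, self-contained illustration of Pearl's d-separation criterion that the paper builds on, the sketch below implements the standard ancestral-moralisation check. The example DAG is invented, and this is not the paper's pog/p-d-separation algorithm.

```python
# Self-contained sketch of a standard d-separation check via the
# ancestral-moralisation criterion.
from collections import deque

def d_separated(dag, x, y, z):
    """dag: dict node -> set of parents. True iff x and y are
    d-separated given the set z in the DAG."""
    # 1. Restrict to x, y, z and all of their ancestors (ancestral graph).
    relevant, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(dag.get(n, set()))
    # 2. Moralise: undirected parent-child edges, plus marry co-parents.
    adj = {n: set() for n in relevant}
    for child in relevant:
        parents = dag.get(child, set()) & relevant
        for p in parents:
            adj[p].add(child); adj[child].add(p)
        for p in parents:
            for q in parents:
                if p != q:
                    adj[p].add(q)
    # 3. Remove the conditioning set and test reachability from x to y.
    blocked = set(z)
    seen, queue = {x}, deque([x])
    while queue:
        n = queue.popleft()
        if n == y:
            return False            # connected, hence not d-separated
        for nb in adj[n] - blocked:
            if nb not in seen:
                seen.add(nb); queue.append(nb)
    return True

# Collider example: A -> C <- B.  A and B are d-separated given {},
# but become d-connected once we condition on the collider C.
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
print(d_separated(dag, "A", "B", set()))   # True
print(d_separated(dag, "A", "B", {"C"}))   # False
```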


Our products will be IoT and Artificial Intelligence driven: Anant Bajaj, Joint MD, Bajaj Electricals

#artificialintelligence

Anant Bajaj, Joint Managing Director of Bajaj Electricals, believes his organization has always been light years ahead of the competition. Way back in 2003, the company created the CIDCO Kharagar electric circle when the concept of smart lighting was but a whisper. The FMEG company, as Bajaj likes to call it, has been lighting up iconic city buildings and landmarks like the CST station, Worli sea face, Rajabai clock tower and Wankhede cricket stadium. Bajaj Electricals was also the lighting partner of the Indian leg of Justin Bieber's Purpose World Tour. Six years into his role as the Joint Managing Director of BEL, Anant Bajaj is all set to drive the company into the next orbit of growth.


Improved Particle Filters for Vehicle Localisation

arXiv.org Machine Learning

The ability to track a moving vehicle is of crucial importance in numerous applications. The task has often been approached by the importance sampling technique of particle filters due to its ability to model non-linear and non-Gaussian dynamics, of which a vehicle travelling on a road network is a good example. However, particle filters perform poorly when observations are highly informative. In this paper, we address this problem by proposing particle filters that sample around the most recent observation. The proposed filters yield an order-of-magnitude improvement in accuracy and efficiency over conventional particle filters, especially when observations are infrequent but low-noise.
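
As a rough sketch of the idea described, the toy 1-D example below contrasts a bootstrap proposal (motion model only) with a proposal that samples around the latest observation; the model, noise levels and parameters are invented and are not the paper's.

```python
# Toy 1-D sketch: when observations are very informative (low noise),
# sampling particles around the latest observation keeps them in the
# high-likelihood region. Model and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
N, motion_std, obs_std = 500, 1.0, 0.05     # informative (low-noise) sensor

def likelihood(particles, z):
    return np.exp(-0.5 * ((z - particles) / obs_std) ** 2)

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def bootstrap_step(particles, z):
    # Shown only for contrast: propose from the motion model, weight by
    # the observation likelihood.
    proposed = particles + rng.normal(0.0, motion_std, size=len(particles))
    w = likelihood(proposed, z)
    return resample(proposed, w / w.sum())

def observation_proposal_step(particles, z):
    # Propose around the latest observation; importance weights correct
    # for the mismatch between proposal and motion model. With symmetric
    # Gaussians the likelihood and proposal terms cancel, leaving the prior.
    proposed = z + rng.normal(0.0, obs_std, size=len(particles))
    prior = np.exp(-0.5 * ((proposed[None, :] - particles[:, None])
                           / motion_std) ** 2).mean(axis=0)
    return resample(proposed, prior / prior.sum())

true_x, particles = 0.0, rng.uniform(-10, 10, N)
for _ in range(10):
    true_x += rng.normal(0.0, motion_std)
    z = true_x + rng.normal(0.0, obs_std)
    particles = observation_proposal_step(particles, z)
print("estimate:", particles.mean(), "truth:", true_x)
```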


Projection in the Epistemic Situation Calculus with Belief Conditionals

AAAI Conferences

A fundamental task in reasoning about action and change is projection, which refers to determining what holds after a number of actions have occurred. A powerful method for solving the projection problem is regression, which reduces reasoning about the future to reasoning about the initial state. In particular, regression has played an important role in the situation calculus and its epistemic extensions. Recently, a modal variant of the situation calculus was proposed, which allows an agent to revise its beliefs based on so-called belief conditionals as part of its knowledge base. In this paper, we show how regression can be extended to reduce beliefs about the future to initial beliefs in the presence of belief conditionals. Moreover, we show how any remaining belief operators can be eliminated as well, thus reducing the belief projection problem to ordinary first-order entailments.
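
As a toy illustration of regression, the sketch below regresses a single fluent through a sequence of actions in a propositional situation-calculus style, reducing a query about the future to a statement about the initial situation. The door/lock domain is invented; this is generic regression, not the paper's modal variant with belief conditionals.

```python
# Toy sketch of regression: each fluent has a successor state axiom
# F(do(a, s)) <-> Phi_F(a, s), and regression rewrites a query about
# the future into a formula about the initial situation.

def regress_open(action, query_after):
    """Regress the fluent 'open' through one action.
    Successor state axiom: open holds after a iff a == 'unlock',
    or open held before and a != 'lock'."""
    if query_after != "open":
        raise ValueError("this toy regressor only handles the fluent 'open'")
    if action == "unlock":
        return "True"                      # made true by the action
    if action == "lock":
        return "False"                     # made false by the action
    return "open"                          # persists (frame assumption)

def regress(actions, query):
    """Regress a query through a sequence of actions, last action first."""
    for a in reversed(actions):
        query = regress_open(a, query) if query == "open" else query
    return query

# Does 'open' hold after doing [wait, unlock, wait] in s0?
print(regress(["wait", "unlock", "wait"], "open"))   # -> "True"
# After [unlock, lock]? Regression reduces the query to "False" about s0.
print(regress(["unlock", "lock"], "open"))           # -> "False"
```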


Going Beyond Literal Command-Based Instructions: Extending Robotic Natural Language Interaction Capabilities

AAAI Conferences

The ultimate goal of human natural language interaction is to communicate intentions. However, these intentions are often not directly derivable from the semantics of an utterance (e.g., when linguistic modulations are employed to convey politeness, respect, and social standing). Robotic architectures with simple command-based natural language capabilities are thus not equipped to handle more liberal, yet natural uses of linguistic communicative exchanges. In this paper, we propose novel mechanisms for inferring intentions from utterances and generating clarification requests that will allow robots to cope with a much wider range of task-based natural language interactions. We demonstrate the potential of these inference algorithms for natural human-robot interactions by running them as part of an integrated cognitive robotic architecture on a mobile robot in a dialogue-based instruction task.


Modifiable Combining Functions

arXiv.org Artificial Intelligence

Modifiable combining functions are a synthesis of two common approaches to combining evidence. They offer many of the advantages of these approaches and avoid some disadvantages. Because they facilitate the acquisition, representation, explanation, and modification of knowledge about combinations of evidence, they are proposed as a tool for knowledge engineers who build systems that reason under uncertainty, not as a normative theory of evidence.