
Collaborating Authors

 Meera, Ajith Anil


Confidence-Aware Decision-Making and Control for Tool Selection

arXiv.org Artificial Intelligence

Self-reflecting on our performance (e.g., how confident we are) before doing a task is essential for decision making, such as selecting the most suitable tool or choosing the best route to drive. While this form of awareness -- thinking about our own performance, or metacognition -- is well known in humans, robots still lack this cognitive ability. Such reflective monitoring can enhance their embodied decision power, robustness and safety. Here, we take a step in this direction by introducing a mathematical framework that allows robots to use their control self-confidence to make better-informed decisions. We derive a closed-form expression for control confidence for dynamic systems (i.e., the posterior inverse covariance of the control action). This control confidence integrates seamlessly into an objective function for decision making that balances: i) performance for task completion, ii) control effort, and iii) self-confidence. To evaluate our theoretical account, we framed the decision-making problem as tool selection, where the agent has to select the best robot arm for a particular control task. Statistical analysis of numerical simulations with randomized 2-DOF arms shows that using control confidence during tool selection improves both real task performance and the reliability of the tool's performance under unmodelled perturbations (e.g., external forces). Furthermore, our results indicate that control confidence is an early indicator of performance, and thus it can be used as a heuristic for making decisions when computational power is restricted or decision-making is intractable. Overall, we show the advantages of a confidence-aware decision-making and control scheme for dynamic systems.
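To make the objective concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of confidence-aware tool selection: each candidate arm is simulated with a simple PD controller, and the selection objective trades off tracking error, control effort, and a proxy for control confidence, here approximated as the inverse empirical variance of the control action rather than the closed-form posterior inverse covariance derived in the paper. All model matrices, gains, and weights are illustrative assumptions.

```python
import numpy as np

def evaluate_tool(A, B, x0, target, steps=200, dt=0.01, k_p=4.0, k_d=1.0):
    """Simulate a PD-controlled linear tool model and return
    (tracking error, control effort, control confidence).

    Control confidence is approximated as the inverse variance of the
    applied control action -- a simplified stand-in for the paper's
    posterior inverse covariance of the control action."""
    x = x0.copy()
    errors, efforts, actions = [], [], []
    for _ in range(steps):
        e = target - x[0]
        u = k_p * e - k_d * x[1]                 # PD control action
        x = x + dt * (A @ x + B.flatten() * u)   # Euler step of the dynamics
        errors.append(e ** 2)
        efforts.append(u ** 2)
        actions.append(u)
    confidence = 1.0 / (np.var(actions) + 1e-9)
    return np.mean(errors), np.mean(efforts), confidence

def select_tool(tools, target, w_perf=1.0, w_effort=0.1, w_conf=0.5):
    """Pick the tool minimising a confidence-aware objective:
    J = w_perf * error + w_effort * effort - w_conf * log(confidence)."""
    best, best_J = None, np.inf
    for name, (A, B, x0) in tools.items():
        err, eff, conf = evaluate_tool(A, B, x0, target)
        J = w_perf * err + w_effort * eff - w_conf * np.log(conf)
        if J < best_J:
            best, best_J = name, J
    return best

# Two hypothetical 1-DOF "arms" with different damping and input gains.
tools = {
    "arm_light": (np.array([[0.0, 1.0], [0.0, -0.5]]),
                  np.array([[0.0], [2.0]]), np.zeros(2)),
    "arm_heavy": (np.array([[0.0, 1.0], [0.0, -2.0]]),
                  np.array([[0.0], [0.8]]), np.zeros(2)),
}
print(select_tool(tools, target=1.0))
```

The sign convention places confidence as a bonus (subtracted log-precision term), so a tool that tracks well with consistent, low-variance control actions is preferred over one that achieves similar error through erratic corrections.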


Adaptive Noise Covariance Estimation under Colored Noise using Dynamic Expectation Maximization

arXiv.org Artificial Intelligence

Identifying the noise associated with a process, i.e., estimating the Noise Covariance Matrix (NCM), is crucial for state estimation and control of a dynamic system [1]. An incorrect NCM results in suboptimal gains (e.g., the Kalman gain), significantly decreasing the quality of state estimation and tracking. Hence, accurate NCM estimation has a wide scope of applications, including robotics, signal processing, fault detection, optimal controller design, and system identification. However, most NCM estimation algorithms assume a white noise condition, which may not hold in practice. In many real-world applications the noise is colored (e.g., there are temporal autocorrelations), which makes NCM estimation challenging.

A wide variety of NCM estimation methods have been proposed within the control community [1]. These methods can be classified into two categories: i) feedback-free methods, where the estimation is done by processing the entire data sequence offline, and ii) feedback methods, where the estimation is done online. The feedback-free methods are of two types: i) correlation methods based on the analysis of the measurement error sequence, such as Indirect Correlation (ICM) [9], Input-Output Correlation (IOCM) [10], Weighted Correlation (WCM) [11], Measurement Average Correlation (MACM) [12], Direct Correlation (DCM) [13] and Measurement Difference Correlation (MDCM) [14], and ii) Maximum-Likelihood Methods (MLM) [15] that maximise the likelihood function over the data.
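As a rough illustration of the maximum-likelihood branch of this taxonomy, the sketch below runs a toy EM-style loop that re-estimates the measurement-noise variance of a scalar linear-Gaussian model from Kalman-filtered residuals. It assumes white noise and uses filtered (not smoothed) estimates for brevity; the paper's Dynamic Expectation Maximization extends this kind of estimation to colored noise via generalised coordinates, which is not reproduced here. All model parameters are made up for the example.

```python
import numpy as np

def em_measurement_noise(y, A, C, Q, R0, iters=20):
    """Toy EM loop estimating the measurement-noise variance R of a scalar
    linear-Gaussian model  x_t = A x_{t-1} + w,  y_t = C x_t + v.

    White-noise, maximum-likelihood (MLM-style) setup only; the E-step uses a
    Kalman filter without a smoother to keep the sketch short."""
    R = R0
    n = len(y)
    for _ in range(iters):
        # E-step: Kalman filter with the current R
        x, P = 0.0, 1.0
        xs, Ps = [], []
        for t in range(n):
            x, P = A * x, A * P * A + Q            # predict
            K = P * C / (C * P * C + R)            # Kalman gain
            x = x + K * (y[t] - C * x)             # update
            P = (1 - K * C) * P
            xs.append(x); Ps.append(P)
        # M-step: re-estimate R from expected squared residuals
        R = np.mean([(y[t] - C * xs[t]) ** 2 + C * Ps[t] * C for t in range(n)])
    return R

# Synthetic data with true measurement-noise variance R = 0.25
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(500):
    x = 0.95 * x + rng.normal(0, np.sqrt(0.01))
    ys.append(x + rng.normal(0, 0.5))
print(em_measurement_noise(np.array(ys), A=0.95, C=1.0, Q=0.01, R0=1.0))
```

If the noise were colored (temporally autocorrelated), the residuals would no longer be independent and this white-noise M-step would produce a biased estimate, which is the gap the colored-noise treatment in the paper addresses.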


Active Inference in Robotics and Artificial Agents: Survey and Challenges

arXiv.org Artificial Intelligence

Active inference is a mathematical framework that originated in computational neuroscience as a theory of how the brain implements action, perception and learning. Recently, it has been shown to be a promising approach to the problems of state estimation and control under uncertainty, as well as a foundation for the construction of goal-driven behaviours in robotics and artificial agents in general. Here, we review the state-of-the-art theory and implementations of active inference for state estimation, control, planning and learning, describing current achievements with a particular focus on robotics. We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness. Furthermore, we connect this approach with other frameworks and discuss its expected benefits and challenges: a unified framework with functional biological plausibility using variational Bayesian inference.
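For intuition, here is a deliberately minimal, hypothetical sketch of the active-inference loop for a scalar state: both perception (belief update) and action descend the gradient of a simplified variational free energy, so the agent acts to make its observations match its prior (goal). It omits generalised coordinates, hierarchical models, and learning; the precisions, learning rates, and the identity observation and action mappings are assumptions for illustration only.

```python
import numpy as np

def active_inference_step(mu, a, y, prior, pi_y=1.0, pi_prior=1.0,
                          lr_mu=0.1, lr_a=0.1):
    """One gradient step on a simplified variational free energy
    F = pi_y/2 * (y - mu)^2 + pi_prior/2 * (mu - prior)^2,
    for a scalar hidden state with an identity observation mapping and the
    assumption dy/da = 1 (action directly shifts the observation).

    Perception updates the belief mu; action updates a so that future
    observations better match the prior (goal)."""
    eps_y = y - mu            # sensory prediction error
    eps_p = mu - prior        # prior (goal) prediction error
    mu_new = mu + lr_mu * (pi_y * eps_y - pi_prior * eps_p)   # descend dF/dmu
    a_new = a - lr_a * pi_y * eps_y                           # descend dF/da via dy/da = 1
    return mu_new, a_new

# Toy environment: the observation is the true state plus the action's effect.
rng = np.random.default_rng(1)
x_true, a, mu, goal = 0.0, 0.0, 0.0, 2.0
for t in range(200):
    y = x_true + a + rng.normal(0, 0.05)
    mu, a = active_inference_step(mu, a, y, prior=goal)
print(round(mu, 2), round(a, 2))  # belief and action should drive y toward the goal
```

The same free-energy functional drives both updates, which is the point the survey emphasises: estimation and control fall out of a single variational objective rather than two separate design problems.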