Beyond prior knowledge: The predictive role of knowledge-building in Tutor Learning

Shahriar, Tasmia, Ameen, Mia, Mallavarapu, Aditi, Jiang, Shiyan, Matsuda, Noboru

arXiv.org Artificial Intelligence

When adopting the role of a teacher in learning-by-teaching environments, students often struggle to engage in knowledge-building activities, such as providing explanations and addressing misconceptions. Instead, they frequently default to knowledge-telling behaviors, where they simply dictate what they already know or what to do without deeper reflection, thereby limiting learning. Teachable agents, particularly those capable of posing persistent follow-up questions, have been shown to encourage students (tutors) to shift from knowledge-telling to knowledge-building and enhance tutor learning. Tutor learning encompasses two interrelated types of knowledge: conceptual and procedural knowledge. Research has established a bidirectional relationship between these knowledge types, where improvements in one reinforce the other. This study investigates the role of knowledge-building in mediating the bidirectional relationship between procedural and conceptual learning. Our findings revealed a stable bidirectional relationship between procedural and conceptual knowledge, with higher post-test scores observed among students who engaged in knowledge-building, regardless of their procedural and conceptual pre-test performance. This suggests that knowledge-building serves as a crucial mechanism bridging the gap between students with low prior knowledge and higher conceptual and procedural learning gain.


KBR To Lead R&D For UK Military Artificial Intelligence Testing - Law360

#artificialintelligence

By Nicole Rosenthal (July 14, 2022, 6:53 PM EDT) -- International cybersecurity contractor KBR announced Thursday it will lead research and development for an artificial intelligence and machine-learning project at the U.K. Ministry of Defence. Frazer-Nash Consultancy, a U.K. and Australian technology services provider that KBR acquired in October 2021, is set to work with the MOD's Defence Science and Technology Laboratory, or Dstl, as part of the three-year Autonomous Resilient Cyber Defence project, which will develop defense concepts that can be tested and evaluated against simulated military attacks. "Dstl are excited to be working with KBR and Frazer-Nash on this vanguard project, delivering cutting-edge response and recovery concept demonstrators over...


Importance Weighting Approach in Kernel Bayes' Rule

Xu, Liyuan, Chen, Yutian, Doucet, Arnaud, Gretton, Arthur

arXiv.org Machine Learning

We study a nonparametric approach to Bayesian computation via feature means, where the expectation of prior features is updated to yield expected posterior features, based on regression from kernel or neural net features of the observations. All quantities involved in the Bayesian update are learned from observed data, making the method entirely model-free. The resulting algorithm is a novel instance of a kernel Bayes' rule (KBR). Our approach is based on importance weighting, which results in superior numerical stability to the existing approach to KBR, which requires operator inversion. We show the convergence of the estimator using a novel consistency analysis on the importance weighting estimator in the infinity norm. We evaluate our KBR on challenging synthetic benchmarks, including a filtering problem with a state-space model involving high-dimensional image observations. The proposed method yields uniformly better empirical performance than the existing KBR, and performance competitive with other methods.
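The importance-weighting principle the abstract relies on can be illustrated in a few lines. This is a minimal sketch of self-normalised importance weighting in general, not the paper's KBR estimator: samples drawn from one density `p` are re-weighted by the ratio `pi(x)/p(x)` so that averages approximate expectations under a different density `pi`. The densities here (standard normal proposal, N(1, 0.5²) target) are illustrative choices.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2), evaluated elementwise
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(0.0, 1.0, n)                       # draws from the sampling density p = N(0, 1)

# importance weights pi(x)/p(x) re-weight samples from p toward pi = N(1, 0.5^2)
w = normal_pdf(x, 1.0, 0.5) / normal_pdf(x, 0.0, 1.0)

# self-normalised estimate of E_pi[X]; the true value is 1
est = np.sum(w * x) / np.sum(w)
```

Because the target is narrower than the proposal, the weight variance is finite and the estimate concentrates near the true mean of 1.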


Bayesian Deconditional Kernel Mean Embeddings

Hsu, Kelvin, Ramos, Fabio

arXiv.org Machine Learning

Conditional kernel mean embeddings form an attractive nonparametric framework for representing conditional means of functions, describing the observation processes for many complex models. However, the recovery of the original underlying function of interest whose conditional mean was observed is a challenging inference task. We formalize deconditional kernel mean embeddings as a solution to this inverse problem, and show that it can be naturally viewed as a nonparametric Bayes' rule. Critically, we introduce the notion of task transformed Gaussian processes and establish deconditional kernel means as their posterior predictive mean. This connection provides Bayesian interpretations and uncertainty estimates for deconditional kernel mean embeddings, explains their regularization hyperparameters, and reveals a marginal likelihood for kernel hyperparameter learning. These revelations further enable practical applications such as likelihood-free inference and learning sparse representations for big data.


The Kernel Kalman Rule — Efficient Nonparametric Inference with Recursive Least Squares

Gebhardt, Gregor H. W. (Technische Universität Darmstadt) | Kupcsik, Andras (National University of Singapore) | Neumann, Gerhard (University of Lincoln)

AAAI Conferences

Nonparametric inference techniques provide promising tools for probabilistic reasoning in high-dimensional nonlinear systems. Most of these techniques embed distributions into reproducing kernel Hilbert spaces (RKHS) and rely on the kernel Bayes' rule (KBR) to manipulate the embeddings. However, the computational demands of the KBR scale poorly with the number of samples, and the KBR often suffers from numerical instabilities. In this paper, we present the kernel Kalman rule (KKR) as an alternative to the KBR. The derivation of the KKR is based on recursive least squares, inspired by the derivation of the Kalman innovation update. We apply the KKR to filtering tasks where we use RKHS embeddings to represent the belief state, resulting in the kernel Kalman filter (KKF). We show on a nonlinear state estimation task with high-dimensional observations that our approach provides significantly improved estimation accuracy while the computational demands are significantly decreased.


Kernel Bayes' Rule

Fukumizu, Kenji, Song, Le, Gretton, Arthur

Neural Information Processing Systems

A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.
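The weighted-sample form of the posterior described above can be sketched numerically. This is a minimal 1-D illustration assuming Gaussian kernels, a prior represented as weights on the X sample itself, and the square-regularised form of the update; the bandwidth and regularisers (`sigma`, `eps`, `delta`) are illustrative choices, not values from the paper.

```python
import numpy as np

def gram(a, b, sigma):
    # Gaussian (RBF) Gram matrix between 1-D sample vectors a and b
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def kbr_weights(X, Y, y, prior_w, sigma=1.0, eps=1e-2, delta=1e-6):
    """Posterior kernel-mean weights w, so that the posterior embedding is
    sum_i w_i k(., X_i); prior given as weights prior_w on the X sample."""
    n = X.size
    GX, GY = gram(X, X, sigma), gram(Y, Y, sigma)
    # weights representing the prior-weighted mean embedding
    mu = np.linalg.solve(GX + n * eps * np.eye(n), GX @ prior_w)
    L = np.diag(mu)
    LG = L @ GY
    ky = gram(Y, np.atleast_1d(y), sigma).ravel()
    # square-regularised "division" by the evidence operator
    return LG @ np.linalg.solve(LG @ LG + n * delta * np.eye(n), L @ ky)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 300)            # latent draws
Y = X + rng.normal(0.0, 0.3, 300)        # noisy observations of X
prior_w = np.full(X.size, 1.0 / X.size)  # prior = empirical marginal of X
w = kbr_weights(X, Y, 1.0, prior_w)
post_mean = w @ X                        # weighted-sample estimate tied to E[X | Y = 1]
```

Since X and Y are positively correlated, the weighted-sample posterior mean for an observation y = 1 should exceed the one for y = -1, which gives a basic sanity check on the weights.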


Kernel Bayes' rule

Fukumizu, Kenji, Song, Le, Gretton, Arthur

arXiv.org Machine Learning

Kernel methods have long provided powerful tools for generalizing linear statistical approaches to nonlinear settings, through an embedding of the sample to a high-dimensional feature space, namely a reproducing kernel Hilbert space (RKHS) [18, 28]. Examples include support vector machines, kernel PCA, and kernel CCA, among others. In these cases, data are mapped via a canonical feature map to a reproducing kernel Hilbert space (of high or even infinite dimension), in which the linear operations that define the algorithms are implemented. The inner product between feature mappings need never be computed explicitly, but is given by a positive definite kernel function unique to the RKHS: this permits efficient computation without the need to deal explicitly with the feature representation. The mappings of individual points to a feature space may be generalized to mappings of probability measures [e.g. 3, Chapter 4]. We call such mappings the kernel means of the underlying random variables.
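The last step, embedding probability measures as kernel means, can be made concrete by comparing two empirical kernel means through their RKHS distance (the maximum mean discrepancy, MMD). This is a small sketch with an illustrative bandwidth and toy Gaussian samples; the kernel trick lets the squared distance be computed entirely from Gram-matrix averages.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # Gaussian kernel Gram matrix between 1-D samples a and b
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Squared RKHS distance between the empirical kernel means of x and y:
    # ||mean phi(x) - mean phi(y)||^2 = E k(x,x') - 2 E k(x,y) + E k(y,y')
    return rbf(x, x, sigma).mean() - 2.0 * rbf(x, y, sigma).mean() + rbf(y, y, sigma).mean()

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 1000)
b = rng.normal(0.0, 1.0, 1000)   # same distribution as a
c = rng.normal(2.0, 1.0, 1000)   # shifted distribution
```

Samples from the same distribution yield a near-zero distance between their kernel means, while the shifted sample is clearly separated.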