Modeling User Knowledge with Dynamic Bayesian Networks in Interactive Narrative Environments

AAAI Conferences

Recent years have seen a growing interest in interactive narrative systems that dynamically adapt story experiences in response to users’ actions, preferences, and goals. However, relatively little empirical work has investigated runtime models of user knowledge for informing interactive narrative adaptations. User knowledge about plot scenarios, story environments, and interaction strategies is critical in a range of interactive narrative contexts, such as mystery and detective genre stories, as well as narrative scenarios for education and training. This paper proposes a dynamic Bayesian network approach for modeling user knowledge in interactive narrative environments. A preliminary version of the model has been implemented for the Crystal Island interactive narrative-centered learning environment. Results from an initial empirical evaluation suggest several future directions for the design and evaluation of user knowledge models for guiding interactive narrative generation and adaptation.
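
A common minimal instantiation of such a user-knowledge model is a two-slice dynamic Bayesian network over a binary "knows / does not know" variable, updated from observed user actions (as in Bayesian knowledge tracing). The sketch below is an illustrative approximation of that idea, not the paper's Crystal Island implementation; the parameter values and the update_knowledge helper are hypothetical.

```python
# Hypothetical parameters for one knowledge component (not from the paper).
P_L0 = 0.2        # prior probability the user already knows the fact
P_LEARN = 0.15    # probability of learning between consecutive time slices
P_GUESS = 0.25    # probability of a correct action despite not knowing
P_SLIP = 0.10     # probability of an incorrect action despite knowing

def update_knowledge(p_known, observed_correct):
    """One DBN time slice: condition on the observed action, then apply
    the transition (learning) model to obtain the next belief state."""
    if observed_correct:
        likelihood_known = 1.0 - P_SLIP
        likelihood_unknown = P_GUESS
    else:
        likelihood_known = P_SLIP
        likelihood_unknown = 1.0 - P_GUESS
    # Bayesian filtering step (observation update)
    posterior = (p_known * likelihood_known) / (
        p_known * likelihood_known + (1.0 - p_known) * likelihood_unknown
    )
    # Transition step (the user may learn between slices)
    return posterior + (1.0 - posterior) * P_LEARN

# Example: belief about the user's knowledge after a sequence of observed actions.
belief = P_L0
for correct in [False, True, True]:
    belief = update_knowledge(belief, correct)
    print(f"P(knows) = {belief:.3f}")
```

An interactive narrative system could consult such belief estimates at runtime, e.g. to withhold a plot revelation until P(knows) for a prerequisite fact crosses a threshold.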


ModelDiff: Testing-Based DNN Similarity Comparison for Model Reuse Detection

arXiv.org Artificial Intelligence

The knowledge of a deep learning model may be transferred to a student model, leading to intellectual property infringement or vulnerability propagation. Detecting such knowledge reuse is nontrivial because the suspect models may not be white-box accessible and/or may serve different tasks. In this paper, we propose ModelDiff, a testing-based approach to deep learning model similarity comparison. Instead of directly comparing the weights, activations, or outputs of two models, we compare their behavioral patterns on the same set of test inputs. Specifically, the behavioral pattern of a model is represented as a decision distance vector (DDV), in which each element is the distance between the model's reactions to a pair of inputs. The knowledge similarity between two models is measured with the cosine similarity between their DDVs. To evaluate ModelDiff, we created a benchmark that contains 144 pairs of models that cover most popular model reuse methods, including transfer learning, model compression, and model stealing. Our method achieved 91.7% correctness on the benchmark, which demonstrates the effectiveness of using ModelDiff for model reuse detection. A study on mobile deep learning apps has shown the feasibility of ModelDiff on real-world models.
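
As a rough sketch of the behavioral-pattern idea described above (not the authors' reference implementation), the following assumes each model is a Python callable returning an output vector for an input; the decision_distance_vector and knowledge_similarity helpers and the choice of Euclidean distance between outputs are illustrative assumptions.

```python
import numpy as np

def decision_distance_vector(model, input_pairs):
    """Behavioral pattern of `model`: for each (x1, x2) pair, record the
    distance between the model's outputs on x1 and x2."""
    ddv = []
    for x1, x2 in input_pairs:
        y1 = np.asarray(model(x1))          # e.g. softmax output vector
        y2 = np.asarray(model(x2))
        ddv.append(np.linalg.norm(y1 - y2)) # any distance metric could be used here
    return np.asarray(ddv)

def knowledge_similarity(model_a, model_b, input_pairs):
    """Cosine similarity between the two models' DDVs on the same test inputs."""
    v_a = decision_distance_vector(model_a, input_pairs)
    v_b = decision_distance_vector(model_b, input_pairs)
    return float(np.dot(v_a, v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b)))
```

Because the DDV captures relative reactions to input pairs rather than raw outputs, the comparison can tolerate models that serve different tasks or output spaces, which is the motivation stated in the abstract.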


Decision Trees, Random Forests, AdaBoost & XGBoost in Python

#artificialintelligence

In this section we will learn what machine learning means and what terms are associated with it. You will see some examples so that you understand what machine learning actually is. The section also covers the steps involved in building a machine learning model, not just linear models but any machine learning model.
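
For a concrete feel of the models named in the title, here is a minimal sketch using scikit-learn and the xgboost package on a synthetic dataset; it is an illustrative example, not the course's own code.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier  # requires the xgboost package

# Synthetic binary classification data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=4),
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "AdaBoost": AdaBoostClassifier(n_estimators=200),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}

for name, model in models.items():
    model.fit(X_train, y_train)               # train on the training split
    print(name, model.score(X_test, y_test))  # accuracy on held-out data
```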


Building of a Corporate Memory for Traffic-Accident Analysis

AI Magazine

This article presents an experiment in expertise capitalization for road traffic-accident analysis. We study the integration of expertise models from different members of an organization into a coherent corporate expertise model. We present our elicitation protocol and the generic models and tools we exploited for knowledge modeling in this multiple-expert context. We compare the knowledge models obtained for seven experts in accidentology and their representation through conceptual graphs. Finally, we discuss the results of our experiment from a knowledge capitalization viewpoint.


Interpreting Deep Knowledge Tracing Model on EdNet Dataset

arXiv.org Artificial Intelligence

With more deep learning techniques being introduced into the knowledge tracing domain, the interpretability of knowledge tracing models has drawn researchers' attention. Our previous study (Lu et al. 2020) on building and interpreting a KT model mainly adopted the ASSISTment dataset (Feng, Heffernan, and Koedinger 2009), whose size is relatively small. In this work, we perform similar tasks on a large and newly available dataset called EdNet (Choi et al. 2020). The preliminary experimental results show the effectiveness of the interpreting techniques, while more questions and tasks remain to be explored.
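
A minimal sketch of a deep knowledge tracing model of the kind discussed here, written in PyTorch; the DKT class, the layer sizes, and the one-hot interaction encoding are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal deep knowledge tracing model: an LSTM over one-hot encoded
    (question, correctness) interactions that predicts the probability of
    answering each question correctly at the next step."""
    def __init__(self, num_questions, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(2 * num_questions, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_questions)

    def forward(self, interactions):
        # interactions: (batch, seq_len, 2 * num_questions)
        h, _ = self.lstm(interactions)
        return torch.sigmoid(self.out(h))  # per-question mastery estimates

# Example with hypothetical sizes (EdNet contains far more questions in practice).
model = DKT(num_questions=100)
dummy = torch.zeros(1, 20, 200)   # one student, 20 interactions
predictions = model(dummy)        # shape: (1, 20, 100)
```

Interpretation techniques such as those referenced in the abstract typically attribute these per-question predictions back to the individual interactions in the input sequence.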