
Collaborating Authors: Hughes, Charles


Reinforced Imitative Graph Representation Learning for Mobile User Profiling: An Adversarial Training Perspective

arXiv.org Artificial Intelligence

In this paper, we study the problem of mobile user profiling, a critical component for quantifying users' characteristics in the human mobility modeling pipeline. Human mobility is a sequential decision-making process driven by users' dynamic interests. With accurate user profiles, a predictive model can faithfully reproduce users' mobility trajectories. In the reverse direction, once the predictive model can imitate users' mobility patterns, the learned user profiles are also optimal. This intuition motivates us to propose an imitation-based mobile user profiling framework that exploits reinforcement learning, in which an agent is trained to precisely imitate users' mobility patterns so as to obtain optimal user profiles. Specifically, the proposed framework includes two modules: (1) a representation module, which produces a state combining user profiles and spatio-temporal context in real time; and (2) an imitation module, in which a Deep Q-Network (DQN) imitates user behavior (actions) based on the state produced by the representation module. However, two challenges arise in running the framework effectively. First, the epsilon-greedy strategy in DQN handles the exploration-exploitation trade-off by picking actions at random with probability epsilon. This randomness feeds back into the representation module, making the learned user profiles unstable. To solve this problem, we propose an adversarial training strategy to guarantee the robustness of the representation module. Second, the representation module updates users' profiles incrementally, which requires integrating the temporal effects of user profiles. Inspired by Long Short-Term Memory (LSTM), we introduce a gated mechanism to incorporate new and old user characteristics into the user profile.
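The two mechanisms named in the abstract, epsilon-greedy action selection in DQN and an LSTM-inspired gated update that blends old and new user characteristics, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the gate parameters `W_g` and `b_g`, and the use of a single sigmoid gate are all assumptions made for the example.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy action.

    This randomness is exactly what the abstract says feeds back into the
    representation module and destabilizes the learned profiles.
    """
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_profile_update(old_profile, new_features, W_g, b_g):
    """LSTM-inspired gate blending old and new user characteristics.

    g near 1 retains the old profile; g near 0 favors the new features.
    W_g and b_g stand in for learned gate parameters (hypothetical here).
    """
    g = sigmoid(W_g @ np.concatenate([old_profile, new_features]) + b_g)
    return g * old_profile + (1.0 - g) * new_features
```

For instance, with `epsilon=0` the agent always acts greedily, and with zero-initialized gate parameters the update returns the elementwise midpoint of the old and new vectors, since the sigmoid of zero is 0.5.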


Gesturing and Embodiment in Teaching: Investigating the Nonverbal Behavior of Teachers in a Virtual Rehearsal Environment

AAAI Conferences

Interactive training environments typically include feedback mechanisms designed to help trainees improve their performance through either guided or self-reflection. In this context, the trainees are candidate teachers who need to hone their social skills, as well as other pedagogical skills, for their future classrooms. We chose an avatar-mediated interactive virtual training system, TeachLivE, as the basic research environment to investigate the motions and embodiment of the trainees. Using tracking sensors and customized improvements to existing gesture recognition utilities, we created a gesture database and employed it to implement our real-time gesture recognition and feedback application. We also investigated multiple methods of feedback provision, including visual and haptic feedback. The results from the conducted user studies and user evaluation surveys indicate the positive impact of the proposed feedback applications and of informed body language. In this paper, we describe the context in which the utilities were developed, the importance of recognizing nonverbal communication in the teaching context, the means of providing automated feedback associated with nonverbal messaging, and the preliminary studies conducted to inform the research.