
Collaborating Authors

 Zhang, Zhitian


Predicting Long-Term Human Behaviors in Discrete Representations via Physics-Guided Diffusion

arXiv.org Artificial Intelligence

Long-term human trajectory prediction is a challenging yet critical task in robotics and autonomous systems. Prior work on predicting accurate short-term human trajectories with only unimodal features often fails in long-term prediction. Reinforcement learning offers one way to learn long-term human behaviors but can suffer from poor data efficiency and difficult optimization. In this work, we propose a long-term human trajectory forecasting framework that leverages a guided diffusion model to generate diverse long-term human behaviors in a high-level latent action space, obtained via a hierarchical action quantization scheme that uses a VQ-VAE to discretize continuous trajectories and the available context. The latent actions are predicted by our guided diffusion model, which applies physics-inspired guidance at test time to constrain the generated multimodal action distributions. Specifically, we use reachability analysis during the reverse denoising process to guide the diffusion steps toward physically feasible latent actions. We evaluate our framework on two publicly available human trajectory forecasting datasets, SFU-Store-Nav and JRDB; extensive experimental results show that our framework achieves superior performance in long-term human trajectory forecasting.
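The reachability-based guidance described in the abstract can be illustrated with a minimal sketch: after each reverse denoising update, the sampled latent action is projected back into a physically reachable set. The velocity bound, noise schedule, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reachability_guidance(z, z_prev, v_max=1.5, dt=0.4):
    """Project a denoised latent action so the implied displacement
    stays inside a reachable set |z - z_prev| <= v_max * dt.
    v_max and dt are hypothetical human-motion bounds."""
    delta = z - z_prev
    norm = np.linalg.norm(delta)
    max_step = v_max * dt
    if norm > max_step:
        delta = delta * (max_step / norm)  # clip to the reachable ball
    return z_prev + delta

def guided_reverse_step(z_t, t, denoise_fn, z_prev, rng):
    """One reverse denoising step with physics-inspired guidance applied
    after the model update (toy DDPM-style schedule, for illustration)."""
    eps_hat = denoise_fn(z_t, t)            # model's noise estimate
    alpha = max(1.0 - 0.02 * t, 1e-6)       # toy schedule, not the paper's
    z_mean = (z_t - 0.02 * t * eps_hat) / np.sqrt(alpha)
    noise = 0.01 * rng.standard_normal(z_t.shape) if t > 1 else 0.0
    return reachability_guidance(z_mean + noise, z_prev)
```

Applying the projection at every step, rather than once at the end, keeps the whole denoising trajectory near the feasible set, which is the intuition behind guiding intermediate diffusion steps.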


Good Things Come in Trees: Emotion and Context Aware Behaviour Trees for Ethical Robotic Decision-Making

arXiv.org Artificial Intelligence

Emotions guide our decision-making process, yet they have been little explored in practical ethical decision-making scenarios. In this challenge, we explore emotions and how they can influence ethical decision-making in a home-robot context: which fetch requests should a robot execute, and why or why not? We discuss, in particular, two aspects of emotion: (1) somatic markers: objects to be retrieved are tagged as negative (dangerous, e.g. knives, or mind-altering, e.g. medicine with overdose potential), providing a quick heuristic for where to focus attention and avoiding the classic Frame Problem of artificial intelligence; (2) emotion inference: users' valence and arousal levels are taken into account in defining how and when a robot should respond to a human's requests, e.g. to carefully consider giving dangerous items to users experiencing intense emotions. Our emotion-based approach builds a foundation for the primary consideration of Safety, and is complemented by policies that support overrides based on Context (e.g. age of user, allergies) and Privacy (e.g. administrator settings). Transparency is another key aspect of our solution. Our solution is defined using behaviour trees, yielding an implementable design that can provide reasoning information in real time.
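A behaviour-tree policy of this kind can be sketched with a few lines of code: a Selector gives untagged items a fast path, while somatically tagged items fall through to a Sequence that checks the user's arousal and any context override. The node semantics, condition names, and the 0.7 arousal threshold are illustrative assumptions, not the authors' tree.

```python
# Minimal behaviour-tree sketch of a fetch-request policy combining
# somatic markers, emotion inference, and a context override.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Condition:
    """Leaf node wrapping a boolean predicate over the blackboard ctx."""
    def __init__(self, fn): self.fn = fn
    def tick(self, ctx): return SUCCESS if self.fn(ctx) else FAILURE

class Sequence:
    """Succeeds only if every child succeeds, left to right."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for c in self.children:
            if c.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Succeeds on the first child that succeeds, left to right."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for c in self.children:
            if c.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

# Somatic marker: negatively tagged items get extra scrutiny.
item_is_safe = Condition(lambda ctx: not ctx["item"].get("negative_tag"))
# Emotion inference: defer on tagged items when arousal is high.
user_is_calm = Condition(lambda ctx: ctx["user"]["arousal"] < 0.7)
# Context/Privacy override, e.g. an administrator setting (hypothetical).
admin_override = Condition(lambda ctx: ctx.get("admin_override", False))

fetch_policy = Selector(
    item_is_safe,                            # fast path: untagged item
    Sequence(user_is_calm, admin_override),  # careful path: tagged item
)
```

Because each node returns an explicit status, the tree can report which branch granted or refused a request, which is what makes behaviour trees attractive for the transparency goal mentioned above.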


Developing a Dataset for Personal Attacks and Other Indicators of Biases

AAAI Conferences

Online argumentation, particularly on popular public discussion boards and social media, is rich with fallacy- and bias-prone arguments. An artificially intelligent tool capable of identifying potential biases in online argumentation might help address this growing problem, but what would it take to develop such a tool? In this paper, we attempt to answer this question by carefully defining both argumentative biases and fallacies and by laying out guidelines for automated bias detection. After sketching a roadmap and identifying current bottlenecks, we take initial steps toward relieving these limitations by creating a dataset of personal and ad hominem attacks in comments. Our progress in this direction is summarized.