Modeling Temporally Dynamic Environments for Persistent Autonomous Agents

AAAI Conferences

This paper explores how an autonomous agent can model dynamic environments and use that knowledge to improve its behavior. This capability is of particular importance for persistent agents, that is, for long-term autonomy. Inspiration is drawn from circadian rhythms in nature, which drive periodic behavior in many organisms. In our approach, the chemical oscillators found in nature are replaced with methods from time series analysis designed for forecasting complex seasonal patterns. This model is incorporated into a behavior-based architecture as an advanced percept, providing future estimates of the environment rather than current measurements. A simulated application of a janitor robot working in an environment with heavy pedestrian traffic was created as a testbed. Experiments used real-world pedestrian traffic counts and showed that an agent using online forecasting of future traffic outperformed both a reactive, sensor-based strategy and a strategy with a deterministic schedule.
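As a concrete illustration of the advanced-percept idea, the following minimal Python sketch (not from the paper; all class names, parameters, and the threshold are assumptions) uses per-slot exponential smoothing over an hour-of-week cycle as a stand-in for the paper's seasonal forecasting method, and selects the robot's behavior from the forecast rather than the current sensor reading:

    # Minimal sketch of an "advanced percept": an online seasonal
    # forecaster feeds *predicted* traffic into the behavior selector
    # instead of the current measurement. Per-slot exponential smoothing
    # stands in for the paper's forecasting method; period, alpha, and
    # the threshold are illustrative assumptions.

    class SeasonalForecaster:
        """Keeps one smoothed level per seasonal slot (e.g., hour of week)."""

        def __init__(self, period, alpha=0.2):
            self.period = period          # length of one season in time steps
            self.alpha = alpha            # smoothing rate for online updates
            self.level = [0.0] * period   # learned level for each slot
            self.t = 0                    # current time step

        def update(self, observed):
            slot = self.t % self.period
            self.level[slot] += self.alpha * (observed - self.level[slot])
            self.t += 1

        def forecast(self, horizon):
            return self.level[(self.t + horizon) % self.period]

    def janitor_behavior(forecaster, horizon=1, traffic_threshold=5.0):
        """Clean only when predicted traffic is low; otherwise stay out of the way."""
        return "clean" if forecaster.forecast(horizon) < traffic_threshold else "wait"

    # Feed hourly counts online and act on the one-hour-ahead forecast.
    f = SeasonalForecaster(period=24 * 7)        # weekly seasonality, hourly bins
    for count in [12, 8, 3, 1, 0, 2, 15, 30]:    # toy traffic stream
        action = janitor_behavior(f, horizon=1)
        f.update(count)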


How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning

arXiv.org Machine Learning

Machine learning has been widely applied to various applications, some of which involve training on privacy-sensitive data. A modest number of privacy breaches have been studied, including the leakage of credit card information from natural language data and of identities from face datasets. However, most of these studies focus on supervised learning models. As deep reinforcement learning (DRL) has been deployed in a number of real-world systems, such as indoor robot navigation, whether trained DRL policies can leak private information requires in-depth study. To explore such privacy breaches in general, we propose two main methods: environment dynamics search via a genetic algorithm and candidate inference based on shadow policies. We conduct extensive experiments to demonstrate such privacy vulnerabilities in DRL under various settings. We leverage the proposed algorithms to infer floor plans from trained Grid World navigation DRL agents with LiDAR perception. The proposed algorithm correctly infers most of the floor plans and reaches an average recovery rate of 95.83% on policy-gradient-trained agents. In addition, we are able to recover the robot configuration in continuous control environments and in an autonomous driving simulator with high accuracy. To the best of our knowledge, this is the first work to investigate privacy leakage in DRL settings, and we show that DRL-based agents can indeed leak privacy-sensitive information from their trained policies.
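To make the environment-dynamics search concrete, here is a minimal, self-contained Python sketch (not the paper's implementation): a genetic algorithm searches over candidate floor plans, represented as small binary occupancy grids, scoring each by how well behavior under it matches the behavior observed from the target policy. The behavior_trace function and all hyperparameters are illustrative stand-ins, since a real attack would roll out the trained DRL agent:

    # Genetic-algorithm search over candidate floor plans. behavior_trace()
    # is a stand-in for querying the target policy; here it returns a noisy
    # view of the plan so the sketch runs on its own.
    import random

    GRID = 6                       # floor plans: GRID x GRID occupancy grids
    POP, GENS, MUT = 40, 60, 0.02  # GA hyperparameters (assumed)

    # Hidden ground truth; the attacker never reads this directly and only
    # sees the behavior it induces in the target policy.
    _true_plan = [random.randint(0, 1) for _ in range(GRID * GRID)]

    def behavior_trace(plan):
        # Stand-in for rolling out the target policy in `plan` and logging
        # its actions / LiDAR readings.
        return [c ^ (random.random() < 0.05) for c in plan]

    _observed = behavior_trace(_true_plan)  # what the attacker observes

    def fitness(plan):
        # Agreement between behavior under the candidate and the observed behavior.
        trace = behavior_trace(plan)
        return sum(int(a == b) for a, b in zip(trace, _observed)) / len(trace)

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(plan):
        return [c ^ (random.random() < MUT) for c in plan]

    population = [[random.randint(0, 1) for _ in range(GRID * GRID)]
                  for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        elite = population[: POP // 4]   # keep the fittest quarter
        population = elite + [mutate(crossover(*random.sample(elite, 2)))
                              for _ in range(POP - len(elite))]

    best = max(population, key=fitness)
    print(f"best candidate matches {fitness(best):.0%} of observed behavior")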


A Survey of Research in Distributed, Continual Planning

AI Magazine

Planning, and then executing the resulting plans, in a dynamic environment implies a continual approach in which planning and execution are interleaved, uncertainty in the current and projected world state is recognized and handled appropriately, and replanning is performed when the situation changes or planned actions fail. Furthermore, complex planning and execution problems may require multiple computational agents and human planners to collaborate on a solution. In this article, we describe a new paradigm for planning in complex, dynamic environments, which we term distributed, continual planning (DCP). We give a historical overview of the research leading to the current state of the art in DCP and survey ongoing research in distributed and continual planning.
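The interleaved plan/execute/monitor loop the survey describes can be sketched in a few lines of Python. In this sketch (not from the article), make_plan and execute are illustrative placeholders, with the state reduced to a bare integer, just enough to show where execution monitoring triggers replanning:

    # Continual-planning skeleton: plan, execute one step, monitor the
    # outcome, and replan when an action fails. All names are placeholders.
    import random

    def make_plan(state, goal):
        """Placeholder planner: one abstract step per unit of progress."""
        return [f"step-{i}" for i in range(state, goal)]

    def execute(action, state):
        """Placeholder executor; fails sometimes to model a dynamic world."""
        if random.random() < 0.2:   # action failed or the world changed
            return False, state
        return True, state + 1

    def continual_plan(state=0, goal=5, max_steps=50):
        plan = make_plan(state, goal)
        for _ in range(max_steps):
            if state >= goal:
                return state        # goal reached
            if not plan:            # plan ran out before reaching the goal
                plan = make_plan(state, goal)
            ok, state = execute(plan.pop(0), state)
            if not ok:              # execution monitoring detects failure,
                plan = make_plan(state, goal)   # so replan and continue
        return state

    continual_plan()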


WS97-08-012.pdf

AAAI Conferences

Abstraction and aggregation are useful for increasing the speed of inference in belief networks and for easing their knowledge acquisition. This paper reviews previous research on belief network abstraction and aggregation, discusses its limitations, and outlines directions for future research.

Introduction: Abstraction and aggregation have been used in several areas of artificial intelligence, including planning, model-based reasoning, and reasoning under uncertainty. For reasoning under uncertainty, the framework of decision theory, and in particular the notion of an influence diagram (or decision diagram), has proven fruitful. An influence diagram is essentially a graph whose nodes are chance nodes, decision (or action) nodes, or utility (or value) nodes.
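A minimal Python sketch of state-space aggregation for a single chance node, one of the simplest forms of the aggregation the paper discusses: fine-grained states are merged into coarser "super states" and their probability mass is summed. The variable and the grouping are illustrative, not taken from the paper:

    # Aggregate a node's fine-grained states into super states by summing
    # probability mass; names and values are illustrative.

    # P(Traffic) over fine-grained states
    p_traffic = {"none": 0.10, "light": 0.25, "moderate": 0.35,
                 "heavy": 0.20, "jam": 0.10}

    # Aggregation map: fine-grained state -> super state
    groups = {"none": "low", "light": "low",
              "moderate": "high", "heavy": "high", "jam": "high"}

    def aggregate(dist, groups):
        """Sum the mass of all fine states mapped to each super state."""
        coarse = {}
        for state, p in dist.items():
            coarse[groups[state]] = coarse.get(groups[state], 0.0) + p
        return coarse

    print(aggregate(p_traffic, groups))  # low: 0.35, high: 0.65 (up to float rounding)

Coarsening a node this way shrinks the conditional probability tables of its children, which is where the speedup in inference and the easier knowledge acquisition come from.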