Sufficient Markov Decision Processes with Alternating Deep Neural Networks
Wang, Longshaokan, Laber, Eric B., Witkiewitz, Katie
Advances in mobile computing technologies have made it possible to monitor and apply data-driven interventions across complex systems in real time. Markov decision processes (MDPs) are the primary model for sequential decision problems with a large or indefinite time horizon. Choosing a representation of the underlying decision process that is both Markov and low-dimensional is non-trivial. We propose a method for constructing a low-dimensional representation of the original decision process for which: (i) the MDP model holds; and (ii) a decision strategy that maximizes mean utility when applied to the low-dimensional representation also maximizes mean utility when applied to the original process. We use a deep neural network to define a class of potential process representations and estimate the process of lowest dimension within this class. The method is illustrated using data from a mobile study on heavy drinking and smoking among college students.
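The core idea of compressing a high-dimensional state into a low-dimensional representation can be illustrated with a small sketch. This is not the authors' alternating-network estimator; it is a minimal linear autoencoder, trained by plain gradient descent on synthetic data, showing how a learned encoder maps a 10-dimensional state down to 2 dimensions. All shapes, names, and the data-generating process are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_high, d_low = 500, 10, 2
# Synthetic "states": high-dimensional observations driven by 2 latent factors,
# so a 2-dimensional representation is (approximately) sufficient.
latent = rng.normal(size=(n, d_low))
mixing = rng.normal(size=(d_low, d_high))
states = latent @ mixing + 0.01 * rng.normal(size=(n, d_high))

# Encoder and decoder weights, initialized small.
W_enc = rng.normal(scale=0.1, size=(d_high, d_low))
W_dec = rng.normal(scale=0.1, size=(d_low, d_high))

lr = 0.01
for _ in range(3000):
    z = states @ W_enc          # low-dimensional representation
    recon = z @ W_dec           # reconstruction of the original state
    err = recon - states
    # Gradients of the mean squared reconstruction error.
    grad_dec = z.T @ err / n
    grad_enc = states.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((states @ W_enc @ W_dec - states) ** 2))
print(mse)  # small after training, since the states are near-2-dimensional
```

The paper's contribution goes beyond this compression step: it additionally requires the learned representation to satisfy the Markov property and to preserve the optimal decision strategy, neither of which a plain reconstruction objective enforces.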
Mar-16-2018, 19:00:00 GMT