Multi-View Decision Processes: The Helper-AI Problem
Christos Dimitrakakis, David C. Parkes, Goran Radanovic, Paul Tylkin
We consider a two-player sequential game in which agents have the same reward function but may disagree on the transition probabilities of an underlying Markovian model of the world. By committing to play a specific policy, the agent with the correct model can steer the behavior of the other agent, and seek to improve utility. We model this setting as a multi-view decision process, which we use to formally analyze the positive effect of steering policies. Furthermore, we develop an algorithm for computing the agents' achievable joint policy, and we experimentally show that it can lead to a large utility increase when the agents' models diverge.
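The steering idea in the abstract can be illustrated with a toy sketch: agent 1 commits to a stationary policy, agent 2 best-responds under its own (divergent) transition model, and agent 1 selects the commitment whose induced joint behavior is best under the true dynamics. The code below is a brute-force illustration on a small randomly generated problem, not the algorithm from the paper; all names and parameters (P1, P2, R, gamma, the state and action counts) are assumptions made for the example.

```python
# Minimal sketch of steering in a multi-view decision process (MVDP).
# NOT the paper's algorithm: a brute-force illustration on a tiny finite MDP,
# assuming agent 1 knows the true transitions P1, agent 2 best-responds under
# its own (possibly wrong) model P2, and agent 1 commits to a deterministic
# stationary policy first. All names here are hypothetical.
import itertools
import numpy as np

n_s, n_a1, n_a2 = 3, 2, 2      # states, agent-1 actions, agent-2 actions
gamma = 0.9
rng = np.random.default_rng(0)

def random_transitions():
    P = rng.random((n_s, n_a1, n_a2, n_s))
    return P / P.sum(axis=-1, keepdims=True)

P1 = random_transitions()                      # true dynamics (agent 1's view)
P2 = 0.5 * P1 + 0.5 * random_transitions()     # agent 2's divergent view
R = rng.random((n_s, n_a1, n_a2))              # shared reward function

def best_response(pi1, P):
    """Agent 2's optimal deterministic policy when agent 1 plays pi1 and
    agent 2 believes the dynamics are P (plain value iteration)."""
    V = np.zeros(n_s)
    for _ in range(500):
        Q = np.array([[R[s, pi1[s], a2] + gamma * P[s, pi1[s], a2] @ V
                       for a2 in range(n_a2)] for s in range(n_s)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def true_value(pi1, pi2):
    """Value of the joint policy under the true dynamics P1 (uniform start state)."""
    V = np.zeros(n_s)
    for _ in range(500):
        V = np.array([R[s, pi1[s], pi2[s]] + gamma * P1[s, pi1[s], pi2[s]] @ V
                      for s in range(n_s)])
    return V.mean()

# Enumerate agent 1's deterministic commitments and pick the one whose induced
# best response (computed under agent 2's model P2) performs best in reality.
best = max(itertools.product(range(n_a1), repeat=n_s),
           key=lambda pi1: true_value(pi1, best_response(pi1, P2)))
print("steering policy:", best,
      "true value:", true_value(best, best_response(best, P2)))
```

Because it enumerates every deterministic commitment for agent 1, this sketch is exponential in the number of states; it is only meant to make the commitment-and-best-response structure concrete.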
Reviews: Multi-View Decision Processes: The Helper-AI Problem
The paper introduces multi-view decision processes (MVDPs) as a cooperative game between two agents, where one agent may have an imperfect model of the transition probabilities of the other agent. Generally, the work is well motivated, and the problem is interesting and clearly relevant to AI and to the NIPS community. The theoretical results are nicely complemented by experiments. The presentation is mostly smooth: the main goals are consistent with the narrative throughout the paper, but some details of the experiments seem less clear (see the points below). Some points are confusing and need further clarification: 1.