Hierarchical Decision Making by Generating and Following Natural Language Instructions
We explore using latent natural language instructions as an expressive and compositional representation of complex actions for hierarchical decision making. Rather than directly selecting micro-actions, our agent first generates a latent plan in natural language, which is then executed by a separate model. We introduce a challenging real-time strategy game environment in which the actions of a large number of units must be coordinated across long time scales. We gather a dataset of 76 thousand pairs of instructions and executions from human play, and train instructor and executor models. Experiments show that models using natural language as a latent variable significantly outperform models that directly imitate human actions. The compositional structure of language proves crucial to its effectiveness for action representation. We also release our code, models and data.
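The instructor/executor decomposition described in the abstract can be sketched in a few lines. This is a toy illustration only, not the paper's models: the function names, the dictionary-based game state, and the rule-based policies are all hypothetical stand-ins for the learned instructor and executor networks.

```python
# Hypothetical sketch of the instructor/executor decomposition.
# All names and rules here are illustrative, not from the paper.

def instructor(state):
    """Map a game state to a latent natural-language instruction (the plan)."""
    if state["enemy_units"] > state["own_units"]:
        return "retreat and build more workers"
    return "attack the enemy base with all units"

def executor(state, instruction):
    """Map (state, instruction) to a micro-action per unit."""
    if "retreat" in instruction:
        return {u: "move_home" for u in range(state["own_units"])}
    return {u: "attack" for u in range(state["own_units"])}

def act(state):
    # Hierarchical decision: first generate a plan in language,
    # then let a separate model execute it.
    plan = instructor(state)
    return plan, executor(state, plan)

plan, actions = act({"own_units": 3, "enemy_units": 5})
```

In the paper both components are trained from the human instruction/execution pairs; the point of the sketch is only the two-level control flow, where language is the interface between planning and micro-action selection.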
- Leisure & Entertainment > Games > Computer Games (0.61)
- Information Technology > Software (0.61)
Reviews: Hierarchical Decision Making by Generating and Following Natural Language Instructions
Post Rebuttal: Thank you for your response. I do see the advantages you listed to support the choice of language over programs. Overall, I feel the general direction of using language for intermediate supervision is really interesting and worthy of further study. However, this paper could be significantly improved in some regards. For example: - The authors should study the generated language to test it for compositionality (as other reviewers have pointed out).
After author feedback and reviewer discussion, this paper received diverging final ratings of 7 (R1), 3 (R2) and 4 (R3). Given this lack of consensus, the AC read the paper, reviews, feedback, and discussion closely, and in this case decided to accept. Of the three reviews, R2's rating was the most negative. R2's review noted that the paper was well-written, and praised the inclusion of challenging linguistic phenomena in the dataset, while raising concerns with the characterization of the language as 'latent' and requesting additional details (e.g. …). In the context of the review itself, R2's rating (3, 'clear reject') appears to be calibrated to a relatively strict standard, possibly stricter than some other NeurIPS reviewers.
A multi-agent model of hierarchical decision dynamics
Decision making has always been a potentially complex problem, and arguably never more so than when there are many competing decision types to be made, when they apply to different scopes and arenas, when outcomes may be uncertain, and when there are many actors with different levels of authority.

One key feature is that the "decision" process is split into three distinct steps: information gathering, judgement formation, and action. Notably, any agent's judgement about a best action is not necessarily the same as the action taken, since (e.g.) the preferred action might be altered - or even overridden - by the judgements of higher-level agents. The other key feature is that agents share only their judgements, and not their observations about the world, or their actions.
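The three-step process above (gather information, form a judgement, act) with higher-level override can be illustrated with a minimal sketch. Everything here is a made-up toy: the world representation, the judgement rule, and the override-by-last-agent convention are assumptions for illustration only.

```python
# Toy sketch of gather -> judge -> act with hierarchical override.
# All names and rules are illustrative assumptions, not from the paper.

def gather(world, agent):
    # Each agent sees only its own local observation, never the full world.
    return world.get(agent, 0)

def judge(observation):
    # An agent's judgement about the best action, based on its observation.
    return "act" if observation > 0 else "wait"

def decide(world, hierarchy):
    """hierarchy lists agents from lowest to highest authority.

    Agents share only their judgements (not observations or actions),
    and a higher-level agent's judgement overrides lower-level ones.
    """
    judgements = [judge(gather(world, agent)) for agent in hierarchy]
    return judgements[-1]  # the highest-authority judgement wins

world = {"worker": 1, "manager": 0}
decision = decide(world, ["worker", "manager"])  # manager overrides worker
```

The sketch makes the two key features concrete: only judgements flow between agents, and the action finally taken can differ from a lower-level agent's preferred action.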
Hierarchical Decision Making by Generating and Following Natural Language Instructions
Hu, Hengyuan, Yarats, Denis, Gong, Qucheng, Tian, Yuandong, Lewis, Mike
A Guide to Decision Trees for Machine Learning and Data Science
Decision Trees are a class of very powerful Machine Learning models capable of achieving high accuracy on many tasks while being highly interpretable. What makes decision trees special in the realm of ML models is the clarity of their information representation. The "knowledge" learned by a decision tree through training is formulated directly into a hierarchical structure, which holds and displays that knowledge in a way that can easily be understood, even by non-experts. You've probably used a decision tree before to make a decision in your own life. Take, for example, the decision about what activity you should do this weekend.
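The weekend-activity decision can be written out as a tiny hand-rolled tree: each `if` is a split on a question, and each `return` is a leaf. The questions and outcomes here are invented for illustration; a trained tree would learn its splits from data.

```python
# A tiny hand-rolled decision tree for the weekend-activity example.
# The questions and outcomes are made up for illustration.

def weekend_activity(is_raining, has_free_time):
    # Root split: do you have any free time this weekend?
    if not has_free_time:
        return "work"
    # Second split: is the weather good enough to go outside?
    if is_raining:
        return "watch a movie"
    return "go hiking"
```

Reading the function top to bottom is exactly how you read the tree root to leaf, which is the interpretability the paragraph above describes.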