Action Dynamics Task Graphs for Learning Plannable Representations of Procedural Tasks
Mao, Weichao, Desai, Ruta, Iuzzolino, Michael Louis, Kamra, Nitin
–arXiv.org Artificial Intelligence
With the advent of augmented reality and advanced vision-powered AI systems, we envision a future of next-generation AI assistants that will be able to deeply understand the at-home tasks that users are doing from visual data and assist them to accomplish these tasks. These AI assistants with reasoning capabilities would be able to track the user's actions in an ongoing complex task, detect mistakes, and provide actionable guidance to the users, such as next steps to take. Such user-centric guidance can either help the user better perform a task or help them learn a new task more efficiently. Our approach allows us to observe users while they perform procedural tasks and generate actionable plans for … ADTG focuses solely on actions and avoids representing states in the graph, thereby making the graph much smaller than typical task graph representations. It uses robust visual representations of actions learnt by treating actions as "transformations between states". We also present an approach to learn (i) task tracking and (ii) next action prediction models based on ADTG, using video demonstrations and paired action annotations of a procedural task.
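The core idea of an action-centric task graph can be illustrated with a minimal sketch. Note this is an illustration only, not the paper's method: ADTG operates on learnt visual action representations, whereas the toy `ActionTaskGraph` class below (a hypothetical name) uses symbolic action labels, builds edges from observed action-to-action transitions in demonstrations, and predicts the next action as the most frequent successor.

```python
from collections import defaultdict

class ActionTaskGraph:
    """Toy sketch of an action-centric task graph: nodes are actions
    (no states are stored), edges count observed transitions between
    consecutive actions across demonstrations."""

    def __init__(self):
        # edges[a][b] = number of times action b followed action a
        self.edges = defaultdict(lambda: defaultdict(int))

    def add_demonstration(self, actions):
        # One demonstration = an ordered sequence of action labels.
        for prev, nxt in zip(actions, actions[1:]):
            self.edges[prev][nxt] += 1

    def next_action(self, current):
        # Predict the most frequently observed successor action.
        successors = self.edges.get(current)
        if not successors:
            return None
        return max(successors, key=successors.get)

g = ActionTaskGraph()
g.add_demonstration(["boil water", "add pasta", "drain", "add sauce"])
g.add_demonstration(["boil water", "add pasta", "drain", "serve"])
print(g.next_action("add pasta"))  # drain
```

Keeping only actions in the graph is what bounds its size: the node set grows with the action vocabulary rather than with the (much larger) space of intermediate world states.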
Jan-11-2023
- Genre:
- Research Report (0.50)
- Workflow (0.47)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks (0.46)
      - Statistical Learning (0.46)
    - Natural Language (1.00)
    - Representation & Reasoning (1.00)
    - Robots (1.00)