HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
Neural Information Processing Systems
Understanding comprehensive assembly knowledge from videos is critical for a futuristic, ultra-intelligent industry. To enable technological breakthroughs, we present HA-ViD, an assembly video dataset that features representative industrial assembly scenarios, a natural procedural knowledge acquisition process, and consistent human-robot shared annotations. Specifically, HA-ViD captures diverse collaboration patterns of real-world assembly, natural human behaviors and learning progression during assembly, and fine-grained action annotations specifying subject, action verb, manipulated object, target object, and tool. We provide 3222 multi-view, multi-modality videos, 1.5M frames, 96K temporal labels and 2M spatial labels. We benchmark four foundational video understanding tasks: action recognition, action segmentation, object detection and multi-object tracking. Importantly, we analyze their performance and the further reasoning steps required for comprehending knowledge about assembly progress, process efficiency, task collaboration, skill parameters and human intention. Details of HA-ViD are available at: https://iai-hrc.github.io/ha-vid.
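
To illustrate the annotation granularity described above, the sketch below models one temporal action label as a (subject, verb, manipulated object, target object, tool) tuple with a frame range. This is a minimal sketch for illustration only: the field names, record format, and example values are assumptions, not HA-ViD's actual schema or file layout.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ActionLabel:
    """One temporal action annotation, following the subject / action verb /
    manipulated object / target object / tool granularity described in the
    abstract. Field names are illustrative assumptions, not HA-ViD's schema."""
    subject: str                  # acting entity, e.g. a hand
    verb: str                     # action verb, e.g. "insert", "screw"
    manipulated_object: str       # part being moved
    target_object: Optional[str]  # part acted upon, if any
    tool: Optional[str]           # tool in use, if any
    start_frame: int              # temporal extent of the action
    end_frame: int


# Hypothetical example record for a single annotated action segment.
label = ActionLabel(
    subject="right hand",
    verb="insert",
    manipulated_object="cylinder",
    target_object="base plate",
    tool=None,
    start_frame=120,
    end_frame=185,
)
print(f"{label.subject} {label.verb} {label.manipulated_object} "
      f"into {label.target_object} [{label.start_frame}-{label.end_frame}]")
```

A structured record like this is what makes the downstream reasoning mentioned in the abstract (assembly progress, process efficiency, task collaboration) tractable, since each action can be queried by any of its five components.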