EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations
Ahmad Darkhalil, Dandan Shan, Bin Zhu, Jian Ma, Amlan Kar, Richard Higgins, Sanja Fidler, David Fouhey, Dima Damen
arXiv.org Artificial Intelligence
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which brings a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked, where we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife, pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Along with the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/VISOR
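The abstract describes three kinds of annotation: pixel-level object masks (sparse manual plus dense interpolated), object classes, and hand-object relations. A minimal sketch of how such per-frame records might be represented in code; the field names and schema here are illustrative assumptions, not the actual VISOR annotation format:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical, simplified schema for VISOR-style annotations.
# Field names are assumptions for illustration, not the released format.

@dataclass
class SegmentMask:
    frame: str                       # frame identifier within an untrimmed video
    class_name: str                  # one of the 257 object classes, e.g. "onion"
    polygon: List[Tuple[float, float]]  # vertices outlining the mask region
    is_interpolated: bool = False    # dense masks are interpolated between manual ones

@dataclass
class HandObjectRelation:
    frame: str
    hand_side: str                   # "left" or "right"
    in_contact_with: Optional[str]   # class of the active object, or None if no contact

def active_objects(relations: List[HandObjectRelation]) -> List[str]:
    """Collect the distinct object classes a hand touches across frames."""
    return sorted({r.in_contact_with for r in relations if r.in_contact_with})
```

For example, a sequence of relations touching a knife and then an onion would yield `["knife", "onion"]` from `active_objects`, a simple form of the long-term reasoning the benchmark targets.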
Sep-26-2022
- Country:
  - Europe > United Kingdom
  - North America
    - Canada > Ontario > Toronto (0.14)
    - United States > Michigan (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Government (0.67)
- Information Technology (1.00)
- Law (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (0.45)
    - Natural Language > Text Processing (0.67)
    - Vision (1.00)