VHAKG: A Multi-modal Knowledge Graph Based on Synchronized Multi-view Videos of Daily Activities
Shusaku Egami, Takahiro Ugai, Swe Nwe Nwe Htun, Ken Fukuda
arXiv.org Artificial Intelligence
Multi-modal knowledge graphs (MMKGs), which ground various non-symbolic data (e.g., images and videos) into symbols, have attracted attention as resources enabling knowledge processing and machine learning across modalities. However, the construction of MMKGs for videos consisting of multiple events, such as daily activities, is still at an early stage. In this paper, we construct an MMKG based on synchronized multi-view simulated videos of daily activities. Besides representing the content of daily life videos as event-centric knowledge, our MMKG also includes fine-grained, frame-by-frame changes, such as bounding boxes within video frames. In addition, we provide support tools for querying our MMKG. As an application example, we demonstrate that our MMKG facilitates benchmarking vision-language models by providing the vision-language datasets needed for a tailored task.
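To make the querying scenario concrete, the sketch below shows how one might retrieve per-frame bounding boxes for an activity event from such an MMKG via a SPARQL endpoint, using Python with the SPARQLWrapper library. The endpoint URL, the ex: namespace, and the property names (hasFrame, hasBoundingBox, ofObject, coordinates) are illustrative assumptions, not the paper's actual vocabulary or service.

```python
# Hypothetical sketch of querying an MMKG for frame-level bounding boxes.
# The endpoint URL and vocabulary below are assumed for illustration and
# may differ from VHAKG's actual schema and SPARQL service.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/vhakg/sparql"  # placeholder endpoint

query = """
PREFIX ex: <http://example.org/vhakg/ontology#>

SELECT ?event ?frame ?object ?bbox WHERE {
  ?event a ex:Event ;
         ex:hasFrame ?frame .        # event-centric link to a video frame
  ?frame ex:hasBoundingBox ?bb .     # frame-by-frame fine-grained annotation
  ?bb ex:ofObject ?object ;
      ex:coordinates ?bbox .         # e.g., an "x,y,w,h" literal
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print one row per (event, frame, object) bounding-box binding.
for row in results["results"]["bindings"]:
    print(row["event"]["value"], row["frame"]["value"],
          row["object"]["value"], row["bbox"]["value"])
```

A pattern like this is what the paper's support tools would spare users from writing by hand: event-centric triples link activities to frames, and frame-level annotations such as bounding boxes hang off those frames.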
Aug-27-2024