AI video descriptions are coming to Blink security cameras

PCWorld

Following in the footsteps of Amazon's Ring and Google's Nest cameras, Blink subscribers will soon see AI-generated summaries of individual video events. Slated to begin rolling out today in beta to U.S. users, Blink Video Descriptions will use AI to analyze the video events captured by Blink security cameras and generate natural-language descriptions of what's happening. The feature, which will work with all existing Blink cameras and doorbells, will start off as a free preview for "select" Blink Basic and Plus subscribers, according to a Blink spokesperson.


EventFormer: A Node-graph Hierarchical Attention Transformer for Action-centric Video Event Prediction

Su, Qile, Zhu, Shoutai, Zhang, Shuai, Liang, Baoyu, Tong, Chao

arXiv.org Artificial Intelligence

Script event induction, which aims to predict the subsequent event based on the context, is a challenging NLP task that has achieved remarkable success in practical applications. However, human events are mostly recorded and presented in the form of videos rather than scripts, yet there is a lack of related research in the realm of vision. To address this problem, we introduce AVEP (Action-centric Video Event Prediction), a task that distinguishes itself from existing video prediction tasks through its incorporation of more complex logic and richer semantic information. We present a large structured dataset, which consists of about $35K$ annotated videos and more than $178K$ video clips of events, built upon existing video event datasets to support this task. The dataset offers more fine-grained annotations, where the atomic unit is represented as a multimodal event argument node, providing better structured representations of video events. Due to the complexity of event structures, traditional visual models that take patches or frames as input are not well-suited for AVEP. We propose EventFormer, a node-graph hierarchical attention-based video event prediction model, which can capture both the relationships between events and their arguments and the coreferential relationships between arguments. We conducted experiments using several SOTA video prediction models as well as LVLMs on AVEP, demonstrating both the complexity of the task and the value of the dataset. Our approach outperforms all these video prediction models. We will release the dataset and code for replicating the experiments and annotations.
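The abstract's node-then-event structure can be illustrated with a minimal sketch of two-level attention pooling: argument nodes are pooled into event vectors, and event vectors are pooled into one context representation. The function names and the use of plain dot-product attention are illustrative assumptions, not EventFormer's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(nodes, query):
    """Scaled dot-product attention pooling: weight each node vector by
    its similarity to the query, then return the weighted sum."""
    scores = nodes @ query / np.sqrt(nodes.shape[-1])
    return softmax(scores) @ nodes

def hierarchical_event_encoding(events, query):
    """Two-level (argument node -> event -> context) pooling: pool each
    event's argument nodes into an event vector, then pool the event
    vectors into a single context representation."""
    event_vecs = np.stack([attention_pool(args, query) for args in events])
    return attention_pool(event_vecs, query)
```

Events with different numbers of argument nodes are handled naturally, since each event is pooled independently before the event-level attention.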


Finding the Trigger: Causal Abductive Reasoning on Video Events

Le, Thao Minh, Le, Vuong, Do, Kien, Gupta, Sunil, Venkatesh, Svetha, Tran, Truyen

arXiv.org Artificial Intelligence

This paper introduces a new problem, Causal Abductive Reasoning on Video Events (CARVE), which involves identifying causal relationships between events in a video and generating hypotheses about causal chains that account for the occurrence of a target event. To facilitate research in this direction, we create two new benchmark datasets with both synthetic and realistic videos, accompanied by trigger-target labels generated through a novel counterfactual synthesis approach. To explore the challenge of solving CARVE, we present a Causal Event Relation Network (CERN) that examines the relationships between video events in temporal and semantic spaces to efficiently determine the root-cause trigger events. Through extensive experiments, we demonstrate the critical roles of event relational representation learning and interaction modeling in solving video causal reasoning challenges. The introduction of the CARVE task, along with the accompanying datasets and the CERN framework, will advance future research on video causal reasoning and significantly facilitate various applications, including video surveillance, root-cause analysis and movie content management.
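The idea of examining event relationships "in temporal and semantic spaces" to find a trigger can be sketched as follows: score candidates by semantic similarity to the target event and mask out events that do not temporally precede it. This is a toy illustration under assumed inputs (embedding vectors and timestamps), not the CERN model itself.

```python
import numpy as np

def trigger_scores(event_embs, times, target_idx):
    """Score each event as a candidate trigger for the target event:
    cosine similarity in the semantic space, masked so that only events
    that temporally precede the target remain candidates."""
    target = event_embs[target_idx]
    sims = event_embs @ target / (
        np.linalg.norm(event_embs, axis=1) * np.linalg.norm(target) + 1e-9)
    precedes = np.asarray(times) < times[target_idx]
    # Events at or after the target time cannot be its cause.
    return np.where(precedes, sims, -np.inf)
```

The highest-scoring preceding event is then proposed as the root-cause trigger; the target itself is excluded because it does not precede itself.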


Wyze cams are getting AI-powered video search

PCWorld

AI-powered video descriptions are the new hotness when it comes to security cameras. Amazon's Ring cameras and Google's Nest cams are already doing them, and now Wyze is joining the party. Available now as part of a new, and pricey, "Cam Unlimited Pro" plan, Wyze's "descriptive alerts" serve up AI-generated captions for video events captured by your Wyze cams, complete with "important details and contextual information." So, rather than getting Wyze alerts that just say "Driveway" or "Bedroom," subscribers to the new plan will get AI descriptions with details such as "Three babies are helping each other climb out of their cribs," or "Front Door caught a suspicious person walking up to the porch and picking up a package." Besides the AI-powered video alerts, Wyze users are also getting AI search, meaning you can find clips in your video history using natural-language queries.


Question-Answering Dense Video Events

Qin, Hangyu, Xiao, Junbin, Yao, Angela

arXiv.org Artificial Intelligence

Multimodal Large Language Models (MLLMs) have shown excellent performance in question-answering over single-event videos. In this paper, we present question-answering on dense video events, a novel task that requires answering and grounding dense-event questions in long videos, thus challenging MLLMs to faithfully comprehend and reason about multiple events occurring over extended time periods. To facilitate the study, we construct DeVE-QA, a dataset featuring 78K questions about 26K events in 10.6K long videos. We then benchmark existing MLLMs and show that those excelling at single-event QA struggle to perform well on DeVE-QA. For improvement, we propose DeVi, a novel training-free MLLM approach built on a hierarchical captioning module, a temporal event memory module, and a self-consistency checking module, which respectively detect, contextualize and memorize, and ground dense events in long videos for question answering. Extensive experiments show that DeVi is superior at answering dense-event questions and grounding relevant video moments. Compared with existing MLLMs, it achieves a remarkable increase of 4.1 percent and 3.7 percent in G(round)QA accuracy on DeVE-QA and NExT-GQA, respectively.
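The temporal event memory idea, storing timestamped event captions and retrieving the ones relevant to a question, can be sketched with a toy class. The word-overlap retrieval here is a deliberately simple stand-in; DeVi's actual modules are built on MLLM captioning, and the class and method names are assumptions for illustration.

```python
class TemporalEventMemory:
    """Toy temporal event memory: store timestamped event captions and
    retrieve the ones most relevant to a question by word overlap."""

    def __init__(self):
        self.entries = []  # (start_sec, end_sec, caption)

    def add(self, start, end, caption):
        self.entries.append((start, end, caption))

    def retrieve(self, question, top_k=2):
        q_terms = set(question.lower().split())
        # Rank events by how many question words appear in their caption.
        scored = sorted(
            self.entries,
            key=lambda e: -len(set(e[2].lower().split()) & q_terms),
        )
        return scored[:top_k]
```

The retrieved (start, end) spans double as grounding evidence for the answer, which is the point of coupling memory with temporal extent.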


Beyond Grounding: Extracting Fine-Grained Event Hierarchies Across Modalities

Ayyubi, Hammad A., Thomas, Christopher, Chum, Lovish, Lokesh, Rahul, Chen, Long, Niu, Yulei, Lin, Xudong, Feng, Xuande, Koo, Jaywon, Ray, Sounak, Chang, Shih-Fu

arXiv.org Artificial Intelligence

Events describe happenings in our world that are of importance. Naturally, understanding events mentioned in multimedia content and how they are related forms an important way of comprehending our world. Existing literature can infer if events across textual and visual (video) domains are identical (via grounding) and thus, on the same semantic level. However, grounding fails to capture the intricate cross-event relations that exist due to the same events being referred to on many semantic levels. For example, in Figure 1, the abstract event of "war" manifests at a lower semantic level through subevents "tanks firing" (in video) and airplane "shot" (in text), leading to a hierarchical, multimodal relationship between the events. In this paper, we propose the task of extracting event hierarchies from multimodal (video and text) data to capture how the same event manifests itself in different modalities at different semantic levels. This reveals the structure of events and is critical to understanding them. To support research on this task, we introduce the Multimodal Hierarchical Events (MultiHiEve) dataset. Unlike prior video-language datasets, MultiHiEve is composed of news video-article pairs, which makes it rich in event hierarchies. We densely annotate a part of the dataset to construct the test benchmark. We show the limitations of state-of-the-art unimodal and multimodal baselines on this task. Further, we address these limitations via a new weakly supervised model, leveraging only unannotated video-article pairs from MultiHiEve. We perform a thorough evaluation of our proposed method, which demonstrates improved performance on this task and highlights opportunities for future research.
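The hierarchical, multimodal event structure described above ("war" in text, with subevents like "tanks firing" in video) can be represented as a small tree of typed nodes. This data-structure sketch is an assumption for illustration; the class and field names are not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class EventNode:
    """An event mention with its source modality; subevents capture the
    parent-child links that grounding alone cannot express."""
    name: str
    modality: str  # "text" or "video"
    subevents: list = field(default_factory=list)

def flatten(event, depth=0):
    """Yield (depth, name, modality) for every node in the hierarchy."""
    yield (depth, event.name, event.modality)
    for sub in event.subevents:
        yield from flatten(sub, depth + 1)

# The Figure 1 example from the abstract, as a two-level hierarchy.
war = EventNode("war", "text", [
    EventNode("tanks firing", "video"),
    EventNode("airplane shot", "text"),
])
```

Unlike a flat grounding link, the tree records that the two subevents sit at a lower semantic level than their parent, across different modalities.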


BiLL-VTG: Bridging Large Language Models and Lightweight Visual Tools for Video-based Texts Generation

Qi, Ji, Ji, Kaixuan, Yu, Jifan, Wang, Duokang, Xu, Bin, Hou, Lei, Li, Juanzi

arXiv.org Artificial Intelligence

Building models that generate textual responses to user instructions about videos is a practical and challenging topic, as it requires both vision understanding and knowledge reasoning. Compared to the language and image modalities, training efficiency remains a serious problem, as existing studies train models on massive collections of sparse videos aligned with brief descriptions. In this paper, we introduce BiLL-VTG, a fast adaptive framework that leverages large language models (LLMs) to reason over videos via essential lightweight visual tools. Specifically, we observe that the key to responding to specific instructions is concentrating on the relevant video events, and we employ two visual tools, structured scene graph generation and descriptive image caption generation, to gather and represent the event information. An LLM equipped with world knowledge is then adopted as the reasoning agent, producing the response through multiple reasoning steps on the specified video events. To address the difficulty of specifying events for the agent, we further propose an Instruction-oriented Video Events Recognition (InsOVER) algorithm based on efficient Hungarian matching, which localizes the corresponding video events from linguistic instructions and thereby enables LLMs to interact with long videos. Extensive experiments on two typical video-based text generation tasks show that our tuning-free framework outperforms pre-trained models, including Flamingo-80B, achieving state-of-the-art performance.
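The matching step InsOVER relies on is a minimum-cost assignment between instruction phrases and candidate video events. As a minimal sketch, the brute-force solver below finds the same optimum the Hungarian algorithm finds in polynomial time; the cost values and function name are illustrative assumptions, and real pipelines would use an efficient solver.

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force minimum-cost assignment of instruction phrases (rows)
    to candidate video events (columns); a stand-in for the Hungarian
    algorithm, which solves the same problem in polynomial time."""
    n_rows, n_cols = len(cost), len(cost[0])
    best_cost, best_match = float("inf"), None
    for perm in permutations(range(n_cols), n_rows):
        c = sum(cost[i][j] for i, j in enumerate(perm))
        if c < best_cost:
            best_cost, best_match = c, list(perm)
    return best_cost, best_match
```

With costs derived from phrase-event dissimilarity, the returned match tells the agent which video event each part of the instruction refers to.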


Video Abnormal Event Detection by Learning to Complete Visual Cloze Tests

Wang, Siqi, Yu, Guang, Cai, Zhiping, Liu, Xinwang, Zhu, En, Yin, Jianping, Liao, Qing

arXiv.org Artificial Intelligence

Video abnormal event detection (VAD) is a vital semi-supervised task that requires learning with only roughly labeled normal videos, as anomalies are often practically unavailable. Although deep neural networks (DNNs) enable great progress in VAD, existing solutions typically suffer from two issues: (1) The precise and comprehensive localization of video events is ignored. (2) The video semantics and temporal context are under-explored. To address those issues, we are motivated by the prevalent cloze test in education and propose a novel approach named visual cloze completion (VCC), which performs VAD by learning to complete "visual cloze tests" (VCTs). Specifically, VCC first localizes each video event and encloses it into a spatio-temporal cube (STC). To achieve both precise and comprehensive localization, appearance and motion are used as mutually complementary cues to mark the object region associated with each video event. For each marked region, a normalized patch sequence is extracted from temporally adjacent frames and stacked into the STC. By comparing each patch and the patch sequence of an STC to a visual "word" and "sentence" respectively, we can deliberately erase a certain "word" (patch) to yield a VCT. DNNs are then trained to infer the erased patch from video semantics, so as to complete the VCT. To fully exploit the temporal context, each patch in an STC is alternatively erased to create multiple VCTs, and the erased patch's optical flow is also inferred to integrate richer motion clues. Meanwhile, a new DNN architecture is designed as a model-level solution to utilize video semantics and temporal context. Extensive experiments demonstrate that VCC achieves state-of-the-art VAD performance. Our code and results are available at \url{https://github.com/yuguangnudt/VEC_VAD/tree/VCC}
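The erase-each-patch construction, where one STC yields as many cloze tests as it has patches, can be sketched directly. Patches are shown as placeholder strings here; in VCC they are image patches, and the helper name is an assumption for illustration.

```python
def make_cloze_tests(patch_seq):
    """Build one visual cloze test per patch: treat the patch sequence of
    an STC as a 'sentence', erase one 'word' (patch) at a time, and keep
    the erased patch as the completion target."""
    tests = []
    for i in range(len(patch_seq)):
        masked = list(patch_seq)
        answer = masked[i]
        masked[i] = None  # the erased position the model must infer
        tests.append({"masked": masked, "answer": answer, "position": i})
    return tests
```

Training on all of these tests forces the model to use the full temporal context around every position, which is the stated motivation for erasing each patch in turn.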