Overview of The MediaEval 2022 Predicting Video Memorability Task
Sweeney, Lorin, Constantin, Mihai Gabriel, Demarty, Claire-Hélène, Fosco, Camilo, de Herrera, Alba G. Seco, Halder, Sebastian, Healy, Graham, Ionescu, Bogdan, Matran-Fernandez, Ana, Smeaton, Alan F., Sultana, Mushfika
This paper describes the 5th edition of the Predicting Video Memorability Task as part of MediaEval 2022. This year we have reorganised and simplified the task in order to facilitate a greater depth of inquiry. As in last year's edition, two datasets are provided in order to support generalisation; however, this year we have replaced the TRECVid 2019 Video-to-Text dataset with the VideoMem dataset in order to remedy underlying data quality issues, and we prioritise short-term memorability prediction by elevating the Memento10k dataset to the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions.
Experiences from the MediaEval Predicting Media Memorability Task
de Herrera, Alba García Seco, Constantin, Mihai Gabriel, Demarty, Claire-Hélène, Fosco, Camilo, Halder, Sebastian, Healy, Graham, Ionescu, Bogdan, Matran-Fernandez, Ana, Smeaton, Alan F., Sultana, Mushfika, Sweeney, Lorin
The Predicting Media Memorability task in the MediaEval evaluation campaign has been running annually since 2018 and several different tasks and data sets have been used in this time. This has allowed us to compare the performance of many memorability prediction techniques on the same data and in a reproducible way and to refine and improve on those techniques. The resources created to compute media memorability are now being used by researchers well beyond the actual evaluation campaign. In this paper we present a summary of the task, including the collective lessons we have learned for the research community.
Overview of The MediaEval 2021 Predicting Media Memorability Task
Kiziltepe, Rukiye Savran, Constantin, Mihai Gabriel, Demarty, Claire-Hélène, Healy, Graham, Fosco, Camilo, de Herrera, Alba García Seco, Halder, Sebastian, Ionescu, Bogdan, Matran-Fernandez, Ana, Smeaton, Alan F., Sweeney, Lorin
This paper describes the MediaEval 2021 Predicting Media Memorability task, which is in its 4th edition this year, as the prediction of short-term and long-term video memorability remains a challenging problem. In 2021, two datasets of videos are used: first, a subset of the TRECVid 2019 Video-to-Text dataset; second, the Memento10k dataset, in order to provide opportunities to explore cross-dataset generalisation. In addition, an electroencephalography (EEG)-based prediction pilot subtask is introduced. In this paper, we outline the main aspects of the task and describe the datasets, evaluation metrics, and requirements for participants' submissions.
An Annotated Video Dataset for Computing Video Memorability
Kiziltepe, Rukiye Savran, Sweeney, Lorin, Constantin, Mihai Gabriel, Doctor, Faiyaz, de Herrera, Alba García Seco, Demarty, Claire-Hélène, Healy, Graham, Ionescu, Bogdan, Smeaton, Alan F.
Using a collection of publicly available links to short-form video clips averaging 6 seconds in duration, 1,275 users manually annotated each video multiple times to indicate both the long-term and short-term memorability of the videos. The annotations were gathered as part of an online memory game and measured a participant's ability to recall having seen the video previously when shown a collection of videos. The recognition tasks were performed on videos seen within the previous few minutes for short-term memorability and within the previous 24 to 72 hours for long-term memorability. The data includes the reaction times for each recognition of each video. Associated with each video are text descriptions (captions) as well as a collection of image-level features computed on 3 frames extracted from each video (start, middle and end). Video-level features are also provided. The dataset was used in the Video Memorability task as part of the MediaEval benchmark in 2020.
Overview of MediaEval 2020 Predicting Media Memorability Task: What Makes a Video Memorable?
De Herrera, Alba García Seco, Kiziltepe, Rukiye Savran, Chamberlain, Jon, Constantin, Mihai Gabriel, Demarty, Claire-Hélène, Doctor, Faiyaz, Ionescu, Bogdan, Smeaton, Alan F.
This paper describes the MediaEval 2020 Predicting Media Memorability task. After first being proposed at MediaEval 2018, the Predicting Media Memorability task is in its 3rd edition this year, as the prediction of short-term and long-term video memorability (VM) remains a challenging problem. In 2020, the format remained the same as in previous editions. This year the videos are a subset of the TRECVid 2019 Video-to-Text dataset, containing more action-rich video content compared with the 2019 task. In this paper we describe the main characteristics of the task, the video collection, the ground truth dataset, the evaluation metrics, and the requirements for participants' run submissions.