How Good is the Model in Model-in-the-loop Event Coreference Resolution Annotation?
Ahmed, Shafiuddin Rehan, Nath, Abhijnan, Regan, Michael, Pollins, Adam, Krishnaswamy, Nikhil, Martin, James H.
arXiv.org Artificial Intelligence
Annotating cross-document event coreference links is a time-consuming and cognitively demanding task that can compromise annotation quality and efficiency. To address this, we propose a model-in-the-loop annotation approach for event coreference resolution, in which a machine learning model suggests only the event pairs that are likely to corefer. We evaluate the effectiveness of this approach by first simulating the annotation process and then comparing various underlying models and datasets using a novel annotator-centric Recall-Annotation effort trade-off metric. Finally, we present a method that achieves 97% recall while substantially reducing the workload required by a fully manual annotation process. Code and data can be found at https://github.com/ahmeshaf/model_in_coref
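The simulation described in the abstract can be sketched as follows: rank all candidate event pairs by a model's coreference score, "annotate" only the top-scoring fraction, and measure what share of the gold coreference links that budget recovers. This is a minimal illustration, not the paper's implementation; the function name `recall_at_effort` and the toy pairs are assumptions for the sketch.

```python
def recall_at_effort(scored_pairs, gold_links, effort_fraction):
    """Recall of gold coreference links when an annotator reviews only
    the top-scoring fraction of model-suggested candidate pairs.

    scored_pairs: list of ((mention_a, mention_b), model_score)
    gold_links:   list of (mention_a, mention_b) true coreference pairs
    effort_fraction: share of candidate pairs the annotator reviews (0..1)
    """
    # Rank candidates by model confidence, highest first.
    ranked = sorted(scored_pairs, key=lambda p: p[1], reverse=True)
    # The annotation "budget": only the top-k pairs get reviewed.
    k = int(len(ranked) * effort_fraction)
    reviewed = {pair for pair, _ in ranked[:k]}
    # Recall = reviewed gold links / all gold links.
    found = sum(1 for link in gold_links if link in reviewed)
    return found / len(gold_links) if gold_links else 0.0


# Toy example (hypothetical scores): two gold links among four candidates.
pairs = [(("e1", "e2"), 0.9), (("e3", "e4"), 0.8),
         (("e5", "e6"), 0.2), (("e7", "e8"), 0.1)]
gold = [("e1", "e2"), ("e5", "e6")]
print(recall_at_effort(pairs, gold, 0.5))  # → 0.5 (one of two links in top half)
print(recall_at_effort(pairs, gold, 1.0))  # → 1.0 (full manual annotation)
```

Sweeping `effort_fraction` from 0 to 1 traces the recall-versus-effort curve that the paper's annotator-centric metric summarizes: a better ranking model pushes gold links toward the top, reaching high recall at a small fraction of the full workload.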
Jun-6-2023