How Good is the Model in Model-in-the-loop Event Coreference Resolution Annotation?
Shafiuddin Rehan Ahmed, Abhijnan Nath, Michael Regan, Adam Pollins, Nikhil Krishnaswamy, James H. Martin
Annotating cross-document event coreference links is a time-consuming and cognitively demanding task that can compromise annotation quality and efficiency. To address this, we propose a model-in-the-loop annotation approach for event coreference resolution, in which a machine learning model suggests only the likely coreferring event pairs. We evaluate the effectiveness of this approach by first simulating the annotation process and then, using a novel annotator-centric recall-annotation effort trade-off metric, comparing the results across underlying models and datasets. Finally, we present a method that obtains 97% recall while substantially reducing the workload required by a fully manual annotation process. Code and data can be found at https://github.com/ahmeshaf/model_in_coref
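The sketch below illustrates the general idea behind such a simulation, not the authors' implementation: a scoring model ranks candidate event pairs, annotators only review pairs above a confidence threshold, and recall is traded off against annotation effort as the threshold varies. The function name, the pair representation, and the effort measure (fraction of pairs reviewed) are assumptions for illustration.

```python
# A minimal sketch (assumed, not the paper's code) of simulating
# model-in-the-loop coreference annotation: only pairs the model
# scores above a threshold are shown to annotators.

from typing import List, Tuple


def simulate_annotation(
    pairs: List[Tuple[str, str]],   # candidate cross-document event pairs
    gold: List[bool],               # True if the pair is truly coreferent
    scores: List[float],            # model confidence for each pair
    threshold: float,
) -> Tuple[float, float]:
    """Return (recall, effort) when annotators only see pairs scored >= threshold."""
    shown = [s >= threshold for s in scores]
    true_total = sum(gold)
    true_shown = sum(1 for g, sh in zip(gold, shown) if g and sh)
    recall = true_shown / true_total if true_total else 1.0
    effort = sum(shown) / len(pairs) if pairs else 0.0  # fraction of pairs reviewed
    return recall, effort


if __name__ == "__main__":
    # Toy data: sweep thresholds to trace the recall-vs-effort curve.
    pairs = [("ev1", "ev2"), ("ev1", "ev3"), ("ev2", "ev3"), ("ev3", "ev4")]
    gold = [True, False, True, False]
    scores = [0.91, 0.40, 0.75, 0.10]
    for t in (0.2, 0.5, 0.8):
        r, e = simulate_annotation(pairs, gold, scores, t)
        print(f"threshold={t:.1f}  recall={r:.2f}  effort={e:.2f}")
```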
arXiv.org Artificial Intelligence
Jun-6-2023