signwriting-evaluation: Effective Sign Language Evaluation via SignWriting
Amit Moryossef, Rotem Zilberman, Ohad Langer
The lack of automatic evaluation metrics tailored for SignWriting presents a significant obstacle in developing effective transcription and translation models for signed languages. This paper introduces a comprehensive suite of evaluation metrics specifically designed for SignWriting, including adaptations of standard metrics such as BLEU and chrF, the application of CLIPScore to SignWriting images, and a novel symbol distance metric unique to our approach. We address the distinct challenges of evaluating single signs versus continuous signing and provide qualitative demonstrations of metric efficacy through score distribution analyses and nearest-neighbor searches within the SignBank corpus. Our findings reveal the strengths and limitations of each metric, offering valuable insights for future advancements using SignWriting. This work contributes essential tools for evaluating SignWriting models, facilitating progress in the field of sign language processing. Our code is available at https://github.com/sign-language-processing/signwriting-evaluation.
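As a rough illustration of the text-level metrics mentioned in the abstract, the sketch below scores two Formal SignWriting (FSW) strings with chrF via the sacrebleu library. The FSW strings and the purely character-level treatment are illustrative assumptions, not the paper's own implementation; the linked repository provides the actual metric suite, including the image-based and symbol-distance metrics.

```python
# Minimal sketch (assumptions, not the authors' code): score a SignWriting
# hypothesis against a reference by treating the FSW strings as plain text
# and computing chrF with sacrebleu. The FSW strings are hypothetical examples.
from sacrebleu.metrics import CHRF

reference = "M518x533S1870a489x515S18701482x490"   # hypothetical reference transcription
hypothesis = "M518x533S1870a489x515S18702482x490"  # hypothetical model output

chrf = CHRF()  # default character n-gram settings
score = chrf.sentence_score(hypothesis, [reference])
print(score.score)  # chrF sees only the character sequence, not symbol geometry
```

A character-level metric like this captures surface similarity of the notation but ignores the spatial layout of symbols, which is one motivation the abstract gives for the image-based CLIPScore adaptation and the symbol distance metric.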
arXiv.org Artificial Intelligence
Oct-17-2024
- Country:
- Europe
- Belgium > Brussels-Capital Region
- Brussels (0.04)
- Croatia > Dubrovnik-Neretva County
- Dubrovnik (0.04)
- Ireland > Leinster
- County Dublin > Dublin (0.04)
- Middle East > Malta
- Eastern Region > Northern Harbour District > St. Julian's (0.04)
- Portugal > Lisbon
- Lisbon (0.04)
- Switzerland > Zürich
- Zürich (0.04)
- North America
- Dominican Republic (0.04)
- United States > Pennsylvania (0.04)
- Genre:
- Research Report (0.70)
- Industry:
- Education > Curriculum > Subject-Specific Education (0.85)
- Technology: