Leveraging Large Language Models for Accurate Sign Language Translation in Low-Resource Scenarios
Bulla, Luana, Tuccio, Gabriele, Mongiovì, Misael, Gangemi, Aldo
Translating natural languages into sign languages is a highly complex and underexplored task. Despite growing interest in accessibility and inclusivity, the development of robust translation systems remains hindered by the limited availability of parallel corpora that align natural language with sign language data. Existing methods often struggle to generalize in these data-scarce environments, as the few datasets available are typically domain-specific, lack standardization, or fail to capture the full linguistic richness of sign languages. To address this limitation, we propose Advanced Use of LLMs for Sign Language Translation (AulSign), a novel method that leverages Large Language Models via dynamic prompting and in-context learning with sample selection and subsequent sign association. Despite their impressive abilities in processing text, LLMs lack intrinsic knowledge of sign languages and are therefore unable to natively perform this kind of translation. To overcome this limitation, we associate the signs with compact descriptions in natural language and instruct the model to use them. We evaluate our method on both English and Italian using SignBank+, a recognized benchmark in the field, as well as the Italian LaCAM CNR-ISTC dataset. We demonstrate superior performance compared to state-of-the-art models in low-data scenarios. Our findings demonstrate the effectiveness of AulSign, with the potential to enhance accessibility and inclusivity in communication technologies for underrepresented linguistic communities.
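The abstract above does not spell out how in-context examples are selected, so the following is only a minimal sketch of similarity-based sample selection for dynamic prompting, assuming a bag-of-words vector as a stand-in for a real sentence-embedding model; the corpus entries and gloss strings are invented for illustration.

```python
import math
from collections import Counter

def bow(text):
    # Bag-of-words vector; a real system would use a sentence embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(query, corpus, k=2):
    # Pick the k parallel pairs most similar to the query sentence,
    # to be inserted into the LLM prompt as in-context demonstrations.
    q = bow(query)
    ranked = sorted(corpus, key=lambda pair: cosine(q, bow(pair[0])), reverse=True)
    return ranked[:k]

# Hypothetical parallel corpus of (spoken text, sign-gloss sequence) pairs.
corpus = [
    ("good morning everyone", "SIGN-GOOD SIGN-MORNING SIGN-ALL"),
    ("the weather is nice today", "SIGN-WEATHER SIGN-NICE SIGN-TODAY"),
    ("good night", "SIGN-GOOD SIGN-NIGHT"),
]
examples = select_examples("good morning to you", corpus, k=2)
```

The selected pairs would then be formatted into the prompt alongside the natural-language sign descriptions the paper mentions, before the model is asked to translate the query.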
A concrete example of inclusive design: deaf-oriented accessibility
Bianchini, Claudia S., Borgia, Fabrizio, de Marsico, Maria
One of the continuing challenges of Human-Computer Interaction research is the full inclusion of people with special needs in the digital world. In particular, this crucial category includes people who experience some kind of limitation in exploiting traditional information communication channels. One immediately thinks of blind people, and several research efforts aim at addressing their needs. By contrast, the limitations suffered by deaf people are often underestimated, often as a result of ignorance or misunderstanding of the real nature of their communication difficulties. This chapter aims both at increasing awareness of deaf people's problems in the digital world and at proposing the project of a comprehensive solution for their better inclusion. As for the former goal, we provide a bird's-eye presentation of the history and evolution of the understanding of deafness issues, and of strategies to address them. As for the latter, we present the design, implementation, and evaluation of the first nucleus of a comprehensive digital framework to facilitate the access of deaf people to the digital world.
Towards improving the e-learning experience for deaf students: e-LUX
Borgia, Fabrizio, Bianchini, Claudia S., de Marsico, Maria
Deaf people are more heavily affected by the digital divide than many would expect. Moreover, most accessibility guidelines addressing their needs deal only with captioning and audio-content transcription. However, this approach does not consider that deaf people have serious difficulties with vocal languages, even in their written form. At present, only a few organizations, such as W3C, have produced guidelines dealing with one of their most distinctive expressions: Sign Language (SL). SL is, in fact, the visual-gestural language used by many deaf people to communicate with each other. The present work aims at supporting the e-learning user experience (e-LUX) for these specific users by enhancing the accessibility of content and container services. In particular, we propose preliminary solutions to tailor activities that can be more fruitful when performed in one's own "native" language, which for most deaf people, especially younger ones, is represented by national SL.
signwriting-evaluation: Effective Sign Language Evaluation via SignWriting
Moryossef, Amit, Zilberman, Rotem, Langer, Ohad
The lack of automatic evaluation metrics tailored for SignWriting presents a significant obstacle in developing effective transcription and translation models for signed languages. This paper introduces a comprehensive suite of evaluation metrics specifically designed for SignWriting, including adaptations of standard metrics such as \texttt{BLEU} and \texttt{chrF}, the application of \texttt{CLIPScore} to SignWriting images, and a novel symbol distance metric unique to our approach. We address the distinct challenges of evaluating single signs versus continuous signing and provide qualitative demonstrations of metric efficacy through score distribution analyses and nearest-neighbor searches within the SignBank corpus. Our findings reveal the strengths and limitations of each metric, offering valuable insights for future advancements using SignWriting. This work contributes essential tools for evaluating SignWriting models, facilitating progress in the field of sign language processing. Our code is available at \url{https://github.com/sign-language-processing/signwriting-evaluation}.
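As an illustration of the "adaptation of standard metrics" idea above, here is a minimal, self-contained chrF-style character n-gram F-score applied to SignWriting strings in FSW notation; this is a simplified sketch, not the metric suite from the paper's repository, and the example FSW strings are invented.

```python
from collections import Counter

def char_ngrams(s, n):
    # Character n-grams of a string (spaces removed, as in chrF).
    s = s.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Simplified chrF: average character n-gram precision and recall,
    # combined into a recall-weighted F-beta score.
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        hyp_total, ref_total = sum(hyp.values()), sum(ref.values())
        if hyp_total == 0 or ref_total == 0:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / hyp_total)
        recalls.append(overlap / ref_total)
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

Because FSW encodes each symbol and its x/y position as characters, two signs that share most of their symbols receive a higher score than unrelated signs, which is the behavior a string metric must exhibit before the more specialized symbol-distance metric becomes useful.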
sign.mt: Real-Time Multilingual Sign Language Translation Application
This demo paper presents sign.mt, an open-source application pioneering real-time multilingual bi-directional translation between spoken and signed languages. Harnessing state-of-the-art open-source models, this tool aims to address the communication divide between the hearing and the deaf, facilitating seamless translation in both spoken-to-signed and signed-to-spoken directions. Promising reliable and unrestricted communication, sign.mt offers offline functionality, crucial in areas with limited internet connectivity. It further enhances user engagement by offering customizable photo-realistic sign language avatars, thereby encouraging a more personalized and authentic user experience. Licensed under CC BY-NC-SA 4.0, sign.mt signifies an important stride towards open, inclusive communication. The app can be used and modified for personal and academic purposes, and even supports a translation API, fostering integration into a wider range of applications. However, it is by no means a finished product. We invite the NLP community to contribute towards the evolution of sign.mt. Whether it be the integration of more refined models, the development of innovative pipelines, or user experience improvements, your contributions can propel this project to new heights. Available at https://sign.mt, it stands as a testament to what we can achieve together, as we strive to make communication accessible to all.
Machine Translation between Spoken Languages and Signed Languages Represented in SignWriting
Jiang, Zifan, Moryossef, Amit, Müller, Mathias, Ebling, Sarah
This paper presents work on novel machine translation (MT) systems between spoken and signed languages, where signed languages are represented in SignWriting, a sign language writing system. Our work seeks to address the lack of out-of-the-box support for signed languages in current MT systems and is based on the SignBank dataset, which contains pairs of spoken language text and SignWriting content. We introduce novel methods to parse, factorize, decode, and evaluate SignWriting, leveraging ideas from neural factored MT. In a bilingual setup--translating from American Sign Language to (American) English--our method achieves over 30 BLEU, while in two multilingual setups--translating in both directions between spoken languages and signed languages--we achieve over 20 BLEU. We find that common MT techniques used to improve spoken language translation similarly affect the performance of sign language translation. These findings validate our use of an intermediate text representation for signed languages to include them in natural language processing research.