Pinhanez, Claudio
Harnessing the Power of Artificial Intelligence to Vitalize Endangered Indigenous Languages: Technologies and Experiences
Pinhanez, Claudio, Cavalin, Paulo, Storto, Luciana, Fimbow, Thomas, Cobbinah, Alexander, Nogima, Julio, Vasconcelos, Marisa, Domingues, Pedro, Mizukami, Priscila de Souza, Grell, Nicole, Gongora, Majoí, Gonçalves, Isabel
Since 2022 we have been exploring application areas and technologies in which Artificial Intelligence (AI) and modern Natural Language Processing (NLP), such as Large Language Models (LLMs), can be employed to foster the use and facilitate the documentation of Indigenous languages which are in danger of disappearing. We start by discussing the decreasing diversity of languages in the world and how working with Indigenous languages poses unique ethical challenges for AI and NLP. To address those challenges, we propose an alternative AI development cycle based on community engagement and usage. Then, we report encouraging results in the development of high-quality machine learning translators for Indigenous languages by fine-tuning state-of-the-art (SOTA) translators with tiny amounts of data and discuss how to avoid some common pitfalls in the process. We also present prototypes we have built in projects done in 2023 and 2024 with Indigenous communities in Brazil, aimed at facilitating writing, and discuss the development of Indigenous Language Models (ILMs) as a replicable and scalable way to create spell-checkers, next-word predictors, and similar tools. Finally, we discuss how we envision a future for language documentation where dying languages are preserved as interactive language models.
Sentence-level Aggregation of Lexical Metrics Correlate Stronger with Human Judgements than Corpus-level Aggregation
Cavalin, Paulo, Domingues, Pedro Henrique, Pinhanez, Claudio
In this paper we show that corpus-level aggregation considerably hinders the ability of lexical metrics to accurately evaluate machine translation (MT) systems. With empirical experiments we demonstrate that averaging individual segment-level scores can make metrics such as BLEU and chrF correlate much more strongly with human judgements and behave considerably more similarly to neural metrics such as COMET and BLEURT. We show that this difference exists because corpus- and segment-level aggregation differ considerably owing to the classical average-of-ratios versus ratio-of-averages mathematical problem. Moreover, as we also show, this difference considerably affects the statistical robustness of corpus-level aggregation. Considering that neural metrics currently only cover a small set of sufficiently-resourced languages, the results in this paper can help make the evaluation of MT systems for low-resource languages more trustworthy.
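The average-of-ratios versus ratio-of-averages gap the abstract refers to can be sketched with a toy numeric example (hypothetical match/length counts, not real MT output or the paper's actual metrics): pooling n-gram counts before dividing weighs long segments more heavily than averaging per-segment ratios.

```python
# Toy illustration of corpus- vs. segment-level aggregation.
# Two hypothetical segments: a short one translated well,
# a long one translated poorly.  (matched_ngrams, total_ngrams)
segments = [(4, 5), (10, 50)]

# Corpus-level aggregation: pool the counts first, then take one ratio
# (ratio of averages) -- the long, poor segment dominates.
corpus_score = sum(m for m, t in segments) / sum(t for m, t in segments)

# Segment-level aggregation: score each segment, then average the scores
# (average of ratios) -- each segment contributes equally.
segment_scores = [m / t for m, t in segments]
sentence_score = sum(segment_scores) / len(segment_scores)

print(round(corpus_score, 3))   # 14/55 ~= 0.255
print(round(sentence_score, 3)) # (0.8 + 0.2)/2 = 0.5
```

The two aggregation schemes give markedly different scores for the same outputs, which is the mechanism behind the correlation differences the paper reports.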
Creating an African American-Sounding TTS: Guidelines, Technical Challenges, and Surprising Evaluations
Pinhanez, Claudio, Fernandez, Raul, Grave, Marcelo, Nogima, Julio, Hoory, Ron
This poses challenges for applications interested in targeting specific demographics (e.g., an African American business or NGO; a voice-tutoring system for children who are not of White ethnicity, etc.). The ultimate goal of the project described in this paper is to provide designers, developers, and enterprises with the choice of a professional voice which is clearly recognizable as African American, and therefore better able to address diversity and inclusiveness issues. More precisely, our goal is to create an African American Text-to-Speech system, which we will refer to simply as an African American voice or AA voice, able to produce synthetic audio segments from standard English texts, and which will be recognized by African American speakers and non-speakers as sounding like a native African American speaker. The AA voice should exhibit a level of technical quality similar to the Standard American English (SAE) synthetic voices currently available through professional platforms. The evaluation of the technical quality of the AA voice, however, is not addressed in this paper, which focuses primarily on whether the AA voice can be recognized as sounding like an African American speaker. Linguists [27, 28] have described a continuum of dialects under what is often termed African American Vernacular English (AAVE). At one end of the spectrum, one finds the largest deviation from SAE in terms of lexicon (including slang), syntax and morphology, and phonological/phonetic properties. At the other end, AAVE speakers begin to approach SAE in terms of lexicon and grammar but still retain marked speech characteristics (primarily in terms of intonation, phonation, and vowel placement [14, 28]) which grant the speech a distinctive identity which listeners use as cues in the perception of African American English [44].
Specifying and Implementing Multi-Party Conversation Rules with Finite-State-Automata
Bayser, Maira Gatti de (IBM Research) | Guerra, Melina Alberio (IBM Research) | Cavalin, Paulo (IBM Research) | Pinhanez, Claudio (IBM Research)
Existing chatbot engines do not properly handle group chats with many users and many chatbots. This prevents chatbots from developing their full potential as social participants, and it happens because there is a lack of methods and tools to design and engineer conversation rules. The work presented in this paper has two major contributions: the presentation of a Finite-State-Automata-based DSL (Domain Specific Language), called DSL-CR, for engineering multi-party conversation rules for inter-message coherence to be used by chatbot engines; and its usage in a real-world dialogue problem with four bots and humans. With this tool, the amount of domain and programming expertise needed for creating conversation rules is reduced, and a larger group of people, such as linguists, can specify the conversation rules.
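The idea of finite-state conversation rules can be sketched in plain Python. This is only an illustrative finite-state automaton, not the actual DSL-CR syntax defined in the paper; the states, events, and bot names below are hypothetical.

```python
# Hypothetical finite-state rules deciding which bot may reply next
# in a multi-party chat, so messages stay mutually coherent.
# Transition table: (current_state, event) -> (next_state, responder)
RULES = {
    ("idle",      "user_question"): ("answering", "faq_bot"),
    ("answering", "follow_up"):     ("answering", "faq_bot"),
    ("answering", "complaint"):     ("escalated", "support_bot"),
    ("escalated", "resolved"):      ("idle",      None),
}

def step(state, event):
    """Apply one conversation event; return (next_state, bot_that_replies).
    If no rule matches, all bots stay silent and the state is unchanged."""
    return RULES.get((state, event), (state, None))

# A short conversation trace through the automaton.
state = "idle"
state, bot = step(state, "user_question")
print(state, bot)  # answering faq_bot
state, bot = step(state, "complaint")
print(state, bot)  # escalated support_bot
```

Encoding the rules as explicit state transitions is what lets a non-programmer (e.g., a linguist) specify who may speak when, without touching the chatbot engine itself.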