Cavalin, Paulo
Harnessing the Power of Artificial Intelligence to Vitalize Endangered Indigenous Languages: Technologies and Experiences
Pinhanez, Claudio, Cavalin, Paulo, Storto, Luciana, Finbow, Thomas, Cobbinah, Alexander, Nogima, Julio, Vasconcelos, Marisa, Domingues, Pedro, Mizukami, Priscila de Souza, Grell, Nicole, Gongora, Majoí, Gonçalves, Isabel
Since 2022 we have been exploring application areas and technologies in which Artificial Intelligence (AI) and modern Natural Language Processing (NLP), such as Large Language Models (LLMs), can be employed to foster the usage and facilitate the documentation of Indigenous languages in danger of disappearing. We start by discussing the decreasing diversity of languages in the world and how working with Indigenous languages poses unique ethical challenges for AI and NLP. To address those challenges, we propose an alternative AI development cycle based on community engagement and usage. We then report encouraging results in the development of high-quality machine translators for Indigenous languages, obtained by fine-tuning state-of-the-art (SOTA) translators with tiny amounts of data, and discuss how to avoid some common pitfalls in the process. We also present prototypes built in 2023 and 2024 in projects with Indigenous communities in Brazil, aimed at facilitating writing, and discuss the development of Indigenous Language Models (ILMs) as a replicable and scalable way to create spell-checkers, next-word predictors, and similar tools. Finally, we discuss how we envision a future for language documentation in which dying languages are preserved as interactive language models.
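The fine-tuning recipe sketched below is a minimal illustration of this idea, not the authors' exact setup: it assumes Hugging Face transformers with PyTorch, the public NLLB checkpoint facebook/nllb-200-distilled-600M as a stand-in SOTA translator, placeholder language codes, and a toy in-memory parallel corpus.

```python
# Minimal sketch (assumptions: Hugging Face transformers + PyTorch; NLLB as the
# base translator; placeholder language codes; toy in-memory parallel corpus).
# The point: a small set of sentence pairs can already adapt a pretrained
# multilingual translator toward a very low-resource language.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"  # example pretrained translator
# NLLB has no codes for most Indigenous languages of Brazil; reusing a related
# code (here Guarani, grn_Latn) for the target side is an assumption.
tokenizer = AutoTokenizer.from_pretrained(
    model_name, src_lang="por_Latn", tgt_lang="grn_Latn"
)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical tiny parallel corpus: (Portuguese, Indigenous-language) pairs.
pairs = [("Bom dia.", "...")]  # replace with real sentence pairs

model.train()
for epoch in range(3):              # a few epochs suffice with tiny data
    for src, tgt in pairs:
        batch = tokenizer(src, text_target=tgt, return_tensors="pt")
        loss = model(**batch).loss  # standard seq2seq cross-entropy loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```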
Sentence-level Aggregation of Lexical Metrics Correlate Stronger with Human Judgements than Corpus-level Aggregation
Cavalin, Paulo, Domingues, Pedro Henrique, Pinhanez, Claudio
In this paper we show that corpus-level aggregation considerably hinders the ability of lexical metrics to accurately evaluate machine translation (MT) systems. With empirical experiments we demonstrate that averaging individual segment-level scores makes metrics such as BLEU and chrF correlate much more strongly with human judgements and behave considerably more similarly to neural metrics such as COMET and BLEURT. We show that this difference exists because corpus- and segment-level aggregation differ considerably, owing to the classical "average of ratios versus ratio of averages" mathematical problem. Moreover, as we also show, this difference considerably affects the statistical robustness of corpus-level aggregation. Considering that neural metrics currently cover only a small set of sufficiently-resourced languages, the results in this paper can help make the evaluation of MT systems for low-resource languages more trustworthy.
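The contrast between the two aggregation schemes can be reproduced in a few lines of Python; the sketch below assumes the sacrebleu package and toy data, not the paper's experimental setup. Corpus-level BLEU pools n-gram counts over the whole test set (a ratio of sums), while segment-level aggregation averages per-sentence scores (an average of ratios), so the two numbers generally disagree.

```python
# Minimal sketch (assumes the `sacrebleu` package; toy data, not the paper's
# setup). Corpus-level BLEU pools n-gram statistics over the whole test set;
# segment-level aggregation averages per-sentence scores instead.
import sacrebleu

hyps = ["the cat sat on the mat", "a quick brown fox"]
refs = ["the cat is on the mat", "the quick brown fox jumps"]

# Corpus-level aggregation: one score from pooled n-gram counts.
corpus_score = sacrebleu.corpus_bleu(hyps, [refs]).score

# Segment-level aggregation: average of per-sentence scores.
seg_scores = [sacrebleu.sentence_bleu(h, [r]).score for h, r in zip(hyps, refs)]
avg_score = sum(seg_scores) / len(seg_scores)

print(f"corpus-level BLEU: {corpus_score:.2f}")
print(f"avg segment BLEU : {avg_score:.2f}")  # usually != corpus-level
```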
Different but Equal: Comparing User Collaboration with Digital Personal Assistants vs. Teams of Expert Agents
Pinhanez, Claudio S., Candello, Heloisa, Pichiliani, Mauro C., Vasconcelos, Marisa, Guerra, Melina, de Bayser, Maíra G., Cavalin, Paulo
This work compares user collaboration with conversational personal assistants vs. teams of expert chatbots. Two studies were performed to investigate whether each approach affects task accomplishment and collaboration costs. Participants interacted with two equivalent financial-advice chatbot systems, one composed of a single conversational adviser and the other based on a team of four expert chatbots. Results indicated that users had different forms of experience but were equally able to achieve their goals. Contrary to expectations, there was evidence that, in the teamwork situation, users were better able to predict agent behavior and did not incur an overhead to maintain common ground, indicating similar collaboration costs. The results point towards the feasibility of either approach for user collaboration with conversational agents.
Specifying and Implementing Multi-Party Conversation Rules with Finite-State-Automata
Bayser, Maira Gatti de, Guerra, Melina Alberio, Cavalin, Paulo, Pinhanez, Claudio
Existing chatbot engines do not properly handle group chats with many users and many chatbots, which prevents chatbots from developing their full potential as social participants. This happens because there is a lack of methods and tools to design and engineer conversation rules. The work presented in this paper has two major contributions: a Finite-State-Automata-based DSL (Domain Specific Language), called DSL-CR, for engineering multi-party conversation rules that enforce inter-message coherence in chatbot engines; and its application to a real-world dialogue problem with four bots and human users. With this tool, the amount of domain and programming expertise needed to create conversation rules is reduced, so a larger group of people, such as linguists, can specify them.
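To make the FSA idea concrete, the sketch below encodes a single multi-party turn-taking rule as a finite-state automaton in plain Python. It is a hypothetical illustration of the approach, not the actual DSL-CR language or engine; all state and event names are invented.

```python
# Hypothetical illustration of an FSA-based conversation rule (not DSL-CR
# itself; all state/event names are invented). The rule: a bot may only take
# the floor after being addressed, and must yield it after answering.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    state: str       # current conversation state
    event: str       # observed conversational event
    next_state: str  # state after the event

RULES = [
    Transition("idle", "user_addresses_bot", "bot_has_floor"),
    Transition("bot_has_floor", "bot_answers", "idle"),
]

def step(state: str, event: str) -> str:
    """Apply the first matching transition; disallowed events are ignored."""
    for t in RULES:
        if t.state == state and t.event == event:
            return t.next_state
    return state

state = "idle"
state = step(state, "bot_answers")         # ignored: bot lacks the floor
state = step(state, "user_addresses_bot")  # -> "bot_has_floor"
state = step(state, "bot_answers")         # -> "idle"
print(state)
```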