Is the Dictionary Done For?

The New Yorker

The print edition of Merriam-Webster was once a touchstone of authority and stability. Then the internet brought about a revolution. Wars over words are inevitably culture wars, and debates over the dictionary have raged for as long as it has existed. Once, every middle-class home had a piano and a dictionary. The purpose of the piano was to be able to listen to music before phonographs were available and affordable. Later on, it was to torture young persons by insisting that they learn to do something few people do well. The purpose of the dictionary was to settle intra-family disputes over the spelling of words like "camaraderie" and "sesquipedalian," or over the correct pronunciation of "puttee." This was the state of the world not that long ago. In the late nineteen-eighties, Merriam-Webster's Collegiate Dictionary was on the best-seller list for a hundred and fifty-five consecutive weeks. Fifty-seven million copies were sold, a number believed to be second only, in this country, to sales of the Bible. There was good money in the word business.


Russia-Ukraine war: List of key events, day 1,313

Al Jazeera

Can Ukraine restore its pre-war borders? Why are Tomahawk missiles for Ukraine a 'red line' for Russia? Is Russia testing NATO with aerial incursions in Europe? Russian forces killed four people, including a 12-year-old girl, and injured 13 in an attack on Ukraine's capital, Kyiv, on Sunday night, Tymur Tkachenko, the head of Kyiv's military administration, wrote in a post on Telegram. Those killed also included staff and patients at a cardiology centre, Tkachenko added.


Where to Go to Get Serious About Learning a Language: Lingoda, Preply, Fluenz

WIRED

To really speak and understand a new language, you need to interact with humans. All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Language learning apps like Duolingo are useful, but they have their limits. They're ideal for getting started with a new language, beefing up vocabulary, practicing skills, and even having fun playing the built-in games.


How AI and Wikipedia have sent vulnerable languages into a doom spiral

MIT Technology Review

Machine translators have made it easier than ever to create error-plagued Wikipedia articles in obscure languages. What happens when AI models get trained on junk pages? When Kenneth Wehr started managing the Greenlandic-language version of Wikipedia four years ago, his first act was to delete almost everything. It had to go, he thought, if the project had any chance of surviving. Wehr, who's 26, isn't from Greenland--he grew up in Germany--but he had become obsessed with the island, an autonomous Danish territory, after visiting as a teenager. He'd spent years writing obscure Wikipedia articles in his native tongue on virtually everything to do with it. He even ended up moving to Copenhagen to study Greenlandic, a language spoken by some 57,000 mostly Indigenous Inuit people scattered across dozens of far-flung Arctic villages. The Greenlandic-language edition was added to Wikipedia around 2003, just a few years after the site launched in English. By the time Wehr took its helm nearly 20 years later, hundreds of Wikipedians had contributed to it and had collectively written some 1,500 articles totaling tens of thousands of words.


From No to Know: Taxonomy, Challenges, and Opportunities for Negation Understanding in Multimodal Foundation Models

Vatsa, Mayank, Bharati, Aparna, Mittal, Surbhi, Singh, Richa

arXiv.org Artificial Intelligence

Negation, a linguistic construct conveying absence, denial, or contradiction, poses significant challenges for multilingual multimodal foundation models. These models excel in tasks like machine translation, text-guided generation, image captioning, audio interactions, and video processing but often struggle to accurately interpret negation across diverse languages and cultural contexts. In this perspective paper, we propose a comprehensive taxonomy of negation constructs, illustrating how structural, semantic, and cultural factors influence multimodal foundation models. We present open research questions and highlight key challenges, emphasizing the importance of addressing these issues to achieve robust negation handling. Finally, we advocate for specialized benchmarks, language-specific tokenization, fine-grained attention mechanisms, and advanced multimodal architectures. These strategies can foster more adaptable and semantically precise multimodal foundation models, better equipped to navigate and accurately interpret the complexities of negation in multilingual, multimodal environments.


Fleurs-SLU: A Massively Multilingual Benchmark for Spoken Language Understanding

Schmidt, Fabian David, Vulić, Ivan, Glavaš, Goran, Adelani, David Ifeoluwa

arXiv.org Artificial Intelligence

While recent multilingual automatic speech recognition models claim to support thousands of languages, ASR for low-resource languages remains highly unreliable due to limited bimodal speech and text training data. Better multilingual spoken language understanding (SLU) can massively strengthen the robustness of multilingual ASR by leveraging language semantics to compensate for scarce training data, such as disambiguating utterances via context or exploiting semantic similarities across languages. Even more so, SLU is indispensable for inclusive speech technology in the roughly half of all living languages that lack a formal writing system. However, the evaluation of multilingual SLU remains limited to shallower tasks such as intent classification or language identification. To address this, we present Fleurs-SLU, a multilingual SLU benchmark that encompasses topical speech classification in 102 languages and multiple-choice question answering through listening comprehension in 92 languages. We extensively evaluate both end-to-end speech classification models and cascaded systems that combine speech-to-text transcription with subsequent classification by large language models on Fleurs-SLU. Our results show that cascaded systems exhibit greater robustness in multilingual SLU tasks, though speech encoders can achieve competitive performance in topical speech classification when appropriately pre-trained. We further find a strong correlation between robust multilingual ASR, effective speech-to-text translation, and strong multilingual SLU, highlighting the mutual benefits between acoustic and semantic speech representations.
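The cascaded setup the abstract describes can be sketched in a few lines: speech is first transcribed to text by an ASR model, and the transcript is then classified by a language model. The `transcribe` and `classify_topic` functions below are hypothetical stand-ins (a lookup table and a keyword-overlap scorer), not the authors' actual models; they only illustrate the shape of the pipeline.

```python
# Minimal sketch of a cascaded SLU pipeline: ASR -> text -> topic label.
# Both stages here are toy stand-ins for real ASR and LLM components.

def transcribe(audio_id: str, asr_table: dict) -> str:
    """Stand-in ASR: return a (possibly noisy) transcript for an utterance."""
    return asr_table.get(audio_id, "")

def classify_topic(text: str, topic_keywords: dict) -> str:
    """Stand-in for LLM classification: pick the topic whose keyword set
    overlaps most with the words of the transcript."""
    words = set(text.lower().split())
    scores = {topic: len(words & kws) for topic, kws in topic_keywords.items()}
    return max(scores, key=scores.get)

# Toy data standing in for utterances and topical classes.
asr_table = {"utt1": "the team scored a late goal in the final match"}
topics = {
    "sports": {"goal", "match", "team", "scored"},
    "politics": {"election", "parliament", "vote"},
}

transcript = transcribe("utt1", asr_table)
label = classify_topic(transcript, topics)
print(label)  # sports
```

The paper's finding is that this two-stage design tends to be more robust across languages than end-to-end speech classifiers, since the text stage can draw on the semantic knowledge of a large language model.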


JMedBench: A Benchmark for Evaluating Japanese Biomedical Large Language Models

Jiang, Junfeng, Huang, Jiahao, Aizawa, Akiko

arXiv.org Artificial Intelligence

Recent developments in Japanese large language models (LLMs) primarily focus on general domains, with fewer advancements in Japanese biomedical LLMs. One obstacle is the absence of a comprehensive, large-scale benchmark for comparison. Furthermore, the resources for evaluating Japanese biomedical LLMs are insufficient. To advance this field, we propose a new benchmark including eight LLMs across four categories and 20 Japanese biomedical datasets across five tasks. Experimental results indicate that: (1) LLMs with a better understanding of Japanese and richer biomedical knowledge achieve better performance in Japanese biomedical tasks, (2) LLMs that are not mainly designed for Japanese biomedical domains can still perform unexpectedly well, and (3) there is still much room for improving the existing LLMs in certain Japanese biomedical tasks. Moreover, we offer insights that could further enhance development in this field. Our evaluation tools tailored to our benchmark as well as the datasets are publicly available in https://huggingface.co/datasets/Coldog2333/JMedBench to facilitate future research.


Are conscious machines possible? - Big Think

Oxford Comp Sci

MICHAEL WOOLDRIDGE: AI is not about trying to create life, right? But it's kind of, very much feels like that. I mean, if we ever achieved the ultimate dream of AI, which I call the "Hollywood dream of AI," the kind of thing that we see in Hollywood movies, then we will have created machines that are conscious, potentially, in the same way that human beings are. So it's very like that kind of dream of creating life- and that, in itself, is a very old dream. It goes back to the ancient Greeks: The Greeks had myths about the blacksmith to the gods, who could create living creatures from metal.