AI Detects Autism Speech Patterns Across Different Languages - Neuroscience News


Summary: Machine learning algorithms helped researchers identify speech patterns in children on the autism spectrum that are consistent across different languages. A new study led by Northwestern University researchers used machine learning--a branch of artificial intelligence--to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that features of speech might be a useful tool for diagnosing the condition. Undertaken with collaborators in Hong Kong, the study yielded insights that could help scientists distinguish between genetic and environmental factors shaping the communication abilities of people with autism, potentially helping them learn more about the origin of the condition and develop new therapies. Children with autism often talk more slowly than typically developing children and exhibit other differences in pitch, intonation, and rhythm. But those differences (called "prosodic differences" by researchers) have been surprisingly difficult to characterize in a consistent, objective way, and their origins have remained unclear for decades. However, a team of researchers led by Northwestern scientists Molly Losh and Joseph C.Y. Lau, along with Hong Kong-based collaborator Patrick Wong and his team, successfully used supervised machine learning to identify speech differences associated with autism.
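The supervised approach described above can be illustrated with a minimal sketch: extract prosodic features from each speech sample and train a classifier to separate the groups. The feature names, the synthetic values, and the nearest-centroid classifier below are all assumptions for illustration; the study's actual features and model are not specified here.

```python
# Minimal nearest-centroid classifier on hypothetical prosodic features.
# Features and values are illustrative, not the study's actual data.

def centroid(rows):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit(train):
    """train: dict mapping label -> list of feature vectors.
    Returns one centroid per label."""
    return {label: centroid(rows) for label, rows in train.items()}

def predict(model, x):
    """Assign x to the label whose centroid is nearest."""
    return min(model, key=lambda label: euclidean(model[label], x))

# Hypothetical features: [speech rate (syll/s), mean pitch (Hz), pitch variance]
train = {
    "asd":     [[2.1, 210.0, 900.0], [2.3, 205.0, 950.0]],
    "typical": [[3.4, 220.0, 400.0], [3.2, 215.0, 420.0]],
}
model = fit(train)
print(predict(model, [2.2, 208.0, 930.0]))  # nearest to the "asd" centroid
```

In practice a model like this would be trained separately on English and Cantonese recordings, and the features that carry over between the two languages are the ones of diagnostic interest.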

Gold Award: Conversational Artificial Intelligence


InfoTalk Corporation Limited, a leader in conversational artificial intelligence technologies, today announced receiving the Gold Award from the Hong Kong ICT Awards for its flagship product, InfoTalk-Speaker 10.0. Version 10 advances text-to-speech technology to a new frontier, enabling computers, robots, and other automated systems to speak in natural human voices like those in sci-fi movies, and opens up a whole new horizon of digital applications. InfoTalk-Speaker is available in multiple languages and speaks with the tonal precision of native speakers of Cantonese and Putonghua.

A Survey of Code-switched Speech and Language Processing Machine Learning

Code-switching, the alternation of languages within a conversation or utterance, is a common communicative phenomenon that occurs in multilingual communities across the world. This survey reviews computational approaches for code-switched Speech and Natural Language Processing. We motivate why processing code-switched text and speech is essential for building intelligent agents and systems that interact with users in multilingual communities. As code-switching data and resources are scarce, we list what is available in various code-switched language pairs with the language processing tasks they can be used for. We review code-switching research in various Speech and NLP applications, including language processing tools and end-to-end systems. We conclude with future directions and open problems in the field.
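A first step in most code-switched pipelines surveyed above is token-level language identification. The sketch below is a deliberately crude baseline, assumed for illustration only: it tags tokens by Unicode script, which separates Chinese from English tokens but cannot handle romanized or script-ambiguous input the way the trained models in the survey can.

```python
# Toy token-level language identification for code-switched text,
# using Unicode script ranges as the only signal (an assumption for
# illustration; real systems use trained sequence models).

def tag_token(token):
    """Tag a token 'zh' if it contains any CJK Unified Ideograph, else 'en'."""
    for ch in token:
        if '\u4e00' <= ch <= '\u9fff':
            return 'zh'
    return 'en'

def tag_utterance(text):
    """Tag each whitespace-separated token with a language label."""
    return [(tok, tag_token(tok)) for tok in text.split()]

# A Cantonese-English code-switched utterance ("I have a meeting tomorrow")
print(tag_utterance("我 聽日 有 meeting 呀"))
```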

An Emotion Detection System for Cantonese

AAAI Conferences

We present the first automatic emotion detection system for Cantonese. This system classifies input text into eight emotion classes: expectancy, joy, love, surprise, anxiety, sorrow, anger, or hate. While a number of emotion corpora and lexica for Mandarin Chinese have been developed, no emotion dataset is available for Cantonese. We leverage existing Mandarin Chinese emotion resources to build the system, with support from Cantonese-Mandarin lexical mappings from a machine translation system, as well as English-Mandarin lexical mappings to handle code-switching in Cantonese input. Evaluation on a set of Cantonese sentences from social media shows promising results.
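The resource-transfer idea above can be sketched in a few lines: route each Cantonese token through a Cantonese-to-Mandarin mapping, then look it up in a Mandarin emotion lexicon. Both dictionaries below are tiny toy samples invented for illustration; they are not the paper's actual resources, and the real system works at far larger scale with machine-translation-derived mappings.

```python
# Toy lexicon-based emotion detection for Cantonese via a
# Cantonese -> Mandarin lexical mapping. Both lexicons are
# illustrative samples, not the paper's resources.

# Assumed Cantonese -> Mandarin word mapping (illustrative entries).
canto_to_mando = {"開心": "高興", "嬲": "生氣", "驚": "害怕"}

# Assumed Mandarin emotion lexicon: word -> emotion class.
mando_emotion = {"高興": "joy", "生氣": "anger", "害怕": "anxiety"}

def detect_emotion(tokens):
    """Map each token to Mandarin, count emotion-class hits,
    and return the majority class (None if no hits)."""
    counts = {}
    for tok in tokens:
        mando = canto_to_mando.get(tok, tok)  # fall back to the token itself
        emo = mando_emotion.get(mando)
        if emo:
            counts[emo] = counts.get(emo, 0) + 1
    return max(counts, key=counts.get) if counts else None

# "I'm very happy" in Cantonese
print(detect_emotion(["我", "好", "開心"]))  # -> joy
```

A full system would add the English-Mandarin mapping mentioned above to cover code-switched English words in Cantonese input.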

Northwestern University Researchers Used Machine Learning To Identify Speech Patterns In Children With Autism That Were Consistent Between English And Cantonese


According to observations, children with autism frequently speak more slowly than typically developing children. Their speech differs in other ways as well, most notably in pitch, intonation, and rhythm. These "prosodic" distinctions have proven very challenging to describe consistently and objectively, and their origins have remained unclear for decades. Researchers from Northwestern University and Hong Kong collaborated on a study to shed light on the causes and diagnosis of the condition, using machine learning to find speech patterns in autistic children that are consistent across Cantonese and English.