
Reproducibility Report: Contextualizing Hate Speech Classifiers with Post-hoc Explanation Artificial Intelligence

This report evaluates the paper "Contextualizing Hate Speech Classifiers with Post-hoc Explanation" (Kennedy et al., 2020) within the scope of the ML Reproducibility Challenge 2020. Our work covers both aspects of the paper: the method itself and the validity of the reported results. In the following sections we describe the paper, related work, the algorithmic framework, and our experiments and evaluations. Scope of reproducibility: for the GHC dataset, the most important difference between BERT+WR and BERT+SOC is the increase in recall, while for the Stormfront dataset there are similar improvements on in-domain data and on the NYT dataset. To further verify these claims, we also ran the same experiment on a new dataset.
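The recall difference highlighted above can be illustrated with a minimal sketch. The labels and predictions below are hypothetical stand-ins, not the paper's actual model outputs; the point is only how a recall gain between two classifiers on the same gold labels is measured:

```python
# Minimal sketch: comparing recall of two hate-speech classifiers.
# Labels and predictions here are hypothetical, purely illustrative.

def recall(y_true, y_pred, positive=1):
    """Fraction of gold positives that the classifier recovered."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Toy gold labels (1 = hate speech) and predictions from two models.
y_true   = [1, 1, 1, 1, 0, 0, 0, 1]
pred_wr  = [1, 0, 0, 1, 0, 0, 0, 1]  # e.g. a BERT+WR-style baseline
pred_soc = [1, 1, 0, 1, 0, 0, 1, 1]  # e.g. a BERT+SOC-style model

print(f"recall WR:  {recall(y_true, pred_wr):.2f}")   # 0.60
print(f"recall SOC: {recall(y_true, pred_soc):.2f}")  # 0.80
```

Note that the SOC-style predictions also include an extra false positive (index 6), which lowers precision but not recall, mirroring the usual precision/recall trade-off when a classifier becomes less conservative.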

EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model Artificial Intelligence

Recently, there has been increasing interest in neural speech synthesis. While deep neural networks achieve state-of-the-art results on text-to-speech (TTS) tasks, generating more emotional and more expressive speech remains a challenge due to the scarcity of high-quality emotional speech datasets and the lack of advanced emotional TTS models. In this paper, we first introduce and publicly release a Mandarin emotional speech dataset comprising 9,724 samples with audio files and human-labeled emotion annotations. We then propose a simple but efficient architecture for emotional speech synthesis called EMSpeech. Unlike models that require additional reference audio as input, our model predicts emotion labels directly from the input text and generates more expressive speech conditioned on the emotion embedding. In the experiments, we first validate the dataset through an emotion classification task, then train our model on the proposed dataset and conduct a series of subjective evaluations. Finally, by achieving comparable performance on the emotional speech synthesis task, we demonstrate the effectiveness of the proposed model.
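The dataset-validation step described above (running an emotion classification task over the labeled samples) can be sketched in miniature. The following uses hypothetical English samples and a trivial keyword classifier, nothing like the actual neural classifier or the Mandarin data; it only shows the shape of checking predicted labels against human annotations:

```python
# Toy sketch of validating an emotion-labeled dataset with a classifier.
# Samples and the keyword model are hypothetical, purely illustrative.

KEYWORDS = {
    "happy": ["great", "wonderful", "love"],
    "sad":   ["miss", "cry", "lost"],
    "angry": ["hate", "furious", "unfair"],
}

def classify(text):
    """Return the first emotion whose cue word appears in the text, else 'neutral'."""
    words = text.lower().split()
    for emotion, cues in KEYWORDS.items():
        if any(cue in words for cue in cues):
            return emotion
    return "neutral"

# Hypothetical (text, gold_label) pairs standing in for dataset samples.
samples = [
    ("what a wonderful day",    "happy"),
    ("i miss you so much",      "sad"),
    ("this is so unfair",       "angry"),
    ("the meeting is at noon",  "neutral"),
]

correct = sum(classify(text) == gold for text, gold in samples)
print(f"accuracy: {correct / len(samples):.2f}")  # 1.00
```

In practice a high classification accuracy on held-out samples is evidence that the emotion annotations are consistent enough to condition a TTS model on.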

Google's Tensorflow team open-sources speech recognition dataset for DIY AI

Google researchers open-sourced a dataset today to give DIY makers interested in artificial intelligence more tools to create basic voice commands for a range of smart devices. Created by the TensorFlow and AIY teams at Google, the Speech Commands dataset is a collection of 65,000 utterances of 30 words for the training and inference of AI models. AIY Projects was launched in May to support do-it-yourself makers who want to tinker with AI. The initiative plans to launch a series of reference designs, and began with speech recognition and a smart speaker you can make in a cardboard box. "The infrastructure we used to create the data has been open sourced too, and we hope to see it used by the wider community to create their own versions, especially to cover underserved languages and applications," Google Brain software engineer Pete Warden wrote in a blog post today.

MLCommons debuts with public 86,000-hour speech data set for AI researchers – TechCrunch

If you want to make a machine learning system, you need data for it, but that data isn't always easy to come by. MLCommons aims to unite disparate companies and organizations in the creation of large public databases for AI training, so that researchers around the world can work together at higher levels, and in doing so advance the nascent field as a whole. Its first effort, the People's Speech Dataset, is many times the size of others like it, and aims to be more diverse as well. MLCommons is a new nonprofit related to MLPerf, which has collected input from dozens of companies and academic institutions to create industry-standard benchmarks for machine learning performance. The endeavor has met with success, but in the process the team encountered a paucity of open data sets that everyone could use.