CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations Artificial Intelligence

Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms. However, these models face challenges of overfitting with limited labels and low model generalization ability. In this paper, we present a Cross-modal Transformer for Audio-and-Language, i.e., CTAL, which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large number of audio-and-language pairs: masked language modeling and masked cross-modal acoustic modeling. After fine-tuning our pre-trained model on multiple downstream audio-and-language tasks, we observe significant improvements across various tasks, such as emotion classification, sentiment analysis, and speaker verification. On this basis, we further propose a specially designed fusion mechanism for the fine-tuning phase, which allows our pre-trained model to achieve better performance. Lastly, we present detailed ablation studies showing that both our novel cross-modality fusion component and our audio-language pre-training methods significantly contribute to the promising results.
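The masked language modeling proxy task mentioned above corrupts the input text and asks the model to reconstruct the hidden tokens. As a minimal sketch of the text-side objective only (the function name, masking rate, and mask symbol are assumptions for illustration, not CTAL's exact configuration):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with a mask symbol,
    returning the corrupted sequence and the prediction targets."""
    rng = random.Random(seed)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(mask_token)
            targets.append(tok)   # the model must reconstruct this token
        else:
            corrupted.append(tok)
            targets.append(None)  # no loss is computed at this position
    return corrupted, targets
```

During pre-training, a loss is computed only at the masked positions, which forces the model to use the surrounding context (and, in the cross-modal case, the paired audio) to fill in the gaps.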

Unsupervised Abstract Reasoning for Raven's Problem Matrices Artificial Intelligence

Raven's Progressive Matrices (RPM) are highly correlated with human intelligence and have been widely used to measure the abstract reasoning ability of humans. In this paper, to study the abstract reasoning capability of deep neural networks, we propose the first unsupervised learning method for solving RPM problems. Since ground-truth labels are not allowed, we design a pseudo target based on the prior constraints of the RPM formulation to approximate the ground-truth label, which effectively converts the unsupervised learning strategy into a supervised one. However, the pseudo target may mislabel the correct answer, and the resulting noisy contrast leads to inaccurate model training. To alleviate this issue, we propose to improve the model performance with negative answers. Moreover, we develop a decentralization method to adapt the feature representation to different RPM problems. Extensive experiments on three datasets demonstrate that our method even outperforms some of the supervised approaches. Our code is available at
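One plausible instantiation of a pseudo target built from RPM prior constraints (a hedged sketch, not the paper's exact rule: the row-prototype heuristic, feature vectors, and function name below are all assumptions) is to score each candidate answer by how consistent the completed third row is with the two complete context rows, and take the best-scoring candidate as the pseudo label:

```python
import numpy as np

def pseudo_label(context_rows, candidates):
    """Pick a pseudo answer for an RPM-style puzzle.

    context_rows: (2, d) feature vectors summarizing the two complete rows.
    candidates:   (n, d) feature vectors, one per third row completed with
                  a different candidate answer.
    The candidate whose completed row is most similar to the complete rows
    is taken as the pseudo target.
    """
    prototype = context_rows.mean(axis=0)  # estimate of the shared row "rule"
    # cosine similarity between each completed row and the prototype
    sims = candidates @ prototype / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(prototype) + 1e-9
    )
    return int(np.argmax(sims))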

CSAGN: Conversational Structure Aware Graph Network for Conversational Semantic Role Labeling Artificial Intelligence

Conversational semantic role labeling (CSRL) is believed to be a crucial step towards dialogue understanding. However, handling conversational structural information remains a major challenge for existing CSRL parsers. In this paper, we present a simple and effective architecture for CSRL that aims to address this problem. Our model is based on a conversational structure-aware graph network which explicitly encodes speaker-dependent information. We also propose a multi-task learning method to further improve the model. Experimental results on benchmark datasets show that our model with our proposed training objectives significantly outperforms previous baselines.

What is Deep Learning?


Deep learning is a major part of the broader family of machine learning methods based on artificial neural networks with representation learning. Artificial Neural Networks (ANNs) are computing systems whose elementary units are artificial neurons. An artificial neuron receives one or more inputs, sums them, and produces an output.
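The behavior of a single artificial neuron can be sketched in a few lines. This minimal example (the sigmoid activation is one common choice among several) computes a weighted sum of the inputs plus a bias and squashes the result into (0, 1):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)
```

A network is built by wiring many such units into layers, where the outputs of one layer become the inputs of the next.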

Generating Explainable Rule Sets from Tree-Ensemble Learning Methods by Answer Set Programming Artificial Intelligence

Interpretability in machine learning is the ability to explain, or to present in understandable terms, to a human [8]. Interpretability is particularly important when, for example, the goal of the user is to gain knowledge from some form of explanation about the data or process through machine learning models, or when making high-stakes decisions based on the outputs of machine learning models, where the user has to be able to trust the models. In this work, we address the problem of explaining and understanding tree-ensemble learners by extracting meaningful rules from them. This problem is of practical relevance in business domains, where understanding the behavior of high-performing machine learning models and extracting knowledge in human-readable form can aid users in the decision-making process. We use Answer Set Programming (ASP) [14, 22] to generate rule sets from tree-ensembles.
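The core idea underlying rule extraction from tree models is that each root-to-leaf path is itself a conjunctive rule. The paper performs this with ASP; the Python sketch below only illustrates the path-to-rule idea on a toy decision tree (the nested-dict tree format and function name are assumptions for illustration, not the authors' representation):

```python
def extract_rules(node, conditions=None):
    """Enumerate root-to-leaf paths of a decision tree as readable rules.

    A node is either {"leaf": label} or
    {"feature": name, "threshold": t, "left": subtree, "right": subtree},
    where the left branch means feature <= threshold.
    """
    conditions = conditions or []
    if "leaf" in node:
        return [(conditions, node["leaf"])]  # one rule per leaf
    f, t = node["feature"], node["threshold"]
    rules = []
    rules += extract_rules(node["left"], conditions + [f"{f} <= {t}"])
    rules += extract_rules(node["right"], conditions + [f"{f} > {t}"])
    return rules
```

For an ensemble, each tree contributes its own candidate rules; the role of ASP in the paper is then to select a small, meaningful subset of these candidates under declaratively stated constraints.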