Learning to Control using Image Feedback Artificial Intelligence

Learning to control complex systems using non-traditional feedback, e.g., in the form of snapshot images, is an important task encountered in diverse domains such as robotics, neuroscience, and biology (cellular systems). In this paper, we present a feedback control framework built on two neural networks (NNs) to design control policies for systems that generate feedback in the form of images. In particular, we develop a deep $Q$-network (DQN)-driven learning control strategy to synthesize a sequence of control inputs from snapshot images that encode information about the system's current state and control action. Further, to train the networks we employ a direct error-driven learning (EDL) approach that applies a set of linear transformations of the NN training error to update the weights in each layer. We verify the efficacy of the proposed control strategy using numerical examples.
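The EDL idea can be sketched in a toy form: instead of backpropagating gradients layer by layer, a fixed linear transformation maps the output training error directly into each layer's weight update. The network sizes, learning rate, and random choice of transformation matrix below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network; all dimensions are illustrative.
d_in, d_h, d_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (d_h, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_h))
# Fixed random matrix mapping the output error to the hidden layer
# (one possible realization of "linear transformations of the error").
B1 = rng.normal(0, 0.1, (d_h, d_out))

def forward(x):
    h = np.tanh(W1 @ x)
    return h, W2 @ h

def edl_step(x, target, lr=0.05):
    """Update every layer directly from a transformed output error."""
    global W1, W2
    h, y = forward(x)
    e = y - target                      # output (training) error
    W2 -= lr * np.outer(e, h)           # output layer: plain delta rule
    e1 = (B1 @ e) * (1 - h ** 2)        # hidden layer: transformed error
    W1 -= lr * np.outer(e1, x)
    return float(np.mean(e ** 2))

x = rng.normal(size=d_in)
target = rng.normal(size=d_out)
losses = [edl_step(x, target) for _ in range(200)]
print(losses[-1] < losses[0])
```

On this single-sample toy problem the transformed-error updates drive the training error down without any layer-to-layer gradient flow, which is the property the paper exploits.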

MOOMIN: Deep Molecular Omics Network for Anti-Cancer Drug Combination Therapy Artificial Intelligence

We propose the molecular omics network (MOOMIN), a multimodal graph neural network that can predict the synergistic effect of drug combinations for cancer treatment. Our model captures drug representations at multiple scales based on a drug-protein interaction network and its metadata. Structural properties of the compounds and proteins are encoded as vertex features for a message-passing scheme that operates on the bipartite interaction graph. The propagated messages form multi-resolution drug representations, which we use to create drug-pair descriptors. By conditioning the drug-combination representations on the cancer cell type, we define a synergy scoring function that can inductively score unseen pairs of drugs. Experimental results on the synergy scoring task demonstrate that MOOMIN outperforms state-of-the-art graph fingerprinting, proximity-preserving node embedding, and existing deep learning approaches. Further results establish that the model's predictive performance is robust to hyperparameter changes. We demonstrate that the model makes high-quality predictions over a wide range of cancer cell-line tissues, that out-of-sample predictions can be validated with external synergy databases, and that the proposed model is data-efficient at learning.
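The pipeline of bipartite message passing, multi-resolution drug representations, and a cell-type-conditioned pair score can be sketched as follows. Every dimension, the single message-passing hop, and the additive/multiplicative pair descriptor are illustrative assumptions, not MOOMIN's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bipartite drug-protein graph: A[i, j] = 1 if drug i binds protein j.
n_drugs, n_prot, d = 4, 6, 8
A = (rng.random((n_drugs, n_prot)) < 0.4).astype(float)
drug_x = rng.normal(size=(n_drugs, d))     # encoded compound features
prot_x = rng.normal(size=(n_prot, d))      # encoded protein features
W = rng.normal(0, 0.3, (d, d))             # shared message weight

def normalize(M, axis):
    s = M.sum(axis=axis, keepdims=True)
    return M / np.maximum(s, 1.0)

# One message-passing hop on the bipartite graph:
# each drug aggregates features from its neighboring proteins.
drug_h = np.tanh(normalize(A, 1) @ prot_x @ W)
# Multi-resolution drug representation: raw features + 1-hop messages.
drug_rep = np.concatenate([drug_x, drug_h], axis=1)

def synergy_score(i, j, cell_ctx):
    """Score a drug pair, conditioned on a cancer cell-line context vector."""
    pair = np.concatenate([drug_rep[i] + drug_rep[j],   # symmetric in i, j
                           drug_rep[i] * drug_rep[j]])
    return float(np.dot(pair, cell_ctx))

cell = rng.normal(size=4 * d)              # hypothetical cell-type embedding
s_ij = synergy_score(0, 1, cell)
```

Building the pair descriptor from order-invariant operations (sum and elementwise product) guarantees the score does not depend on which drug is listed first.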

Wasserstein Distance Maximizing Intrinsic Control Artificial Intelligence

In the unsupervised skill-discovery setting, mutual-information-based objectives have shown some success in learning skills that reach a diverse set of states. These objectives include a KL-divergence term, which is maximized by visiting distinct states even if those states are not far apart in the MDP. This paper presents an approach that rewards the agent for learning skills that maximize the Wasserstein distance of their state visitation from the skill's start state. It shows that such an objective leads to a policy that covers more distance in the MDP than diversity-based objectives, and validates the results on a variety of Atari environments.
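The contrast with diversity bonuses is easy to see in a toy case: for point-mass state distributions under an L2 ground metric, the 1-Wasserstein distance from the start state reduces to plain Euclidean distance. The environment and the two hand-crafted skills below are invented purely for illustration:

```python
import numpy as np

def wasserstein_reward(state, start_state):
    """For point masses under an L2 ground metric, the 1-Wasserstein
    distance to the start state is just the Euclidean distance, so a
    distance-to-start reward is one simple instance of the idea."""
    return float(np.linalg.norm(np.asarray(state) - np.asarray(start_state)))

# Two toy skills in a 2-D MDP: one travels far from the start, one
# dithers between two nearby but distinct states (which a pure
# diversity/KL bonus would still reward for being "distinct").
start = np.zeros(2)
far_skill = [np.array([i, 0.0]) for i in range(1, 6)]
dither_skill = [np.array([0.1 * (i % 2), 0.0]) for i in range(1, 6)]

r_far = sum(wasserstein_reward(s, start) for s in far_skill)
r_dither = sum(wasserstein_reward(s, start) for s in dither_skill)
print(r_far > r_dither)  # the far-travelling skill earns more reward
```

The dithering skill visits distinct states, which suffices to maximize a KL-based diversity term, but it accumulates almost no Wasserstein reward because those states stay close to the start.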

CLLD: Contrastive Learning with Label Distance for Text Classification Artificial Intelligence

Existing pre-trained models have achieved state-of-the-art performance on various text classification tasks and have proven useful for learning universal language representations. However, even advanced pre-trained models cannot effectively distinguish the semantic discrepancy between similar texts, which strongly affects performance on hard-to-distinguish classes. To address this problem, we propose a novel Contrastive Learning with Label Distance (CLLD) in this work. Inspired by recent advances in contrastive learning, we specifically design a classification method that uses label distance to learn contrastive classes. CLLD preserves flexibility over the subtle differences that lead to different label assignments, while generating distinct representations for classes that are similar to one another. Extensive experiments on public benchmarks and internal datasets demonstrate that our method improves the performance of pre-trained models on classification tasks. Importantly, our experiments suggest that the learned label distance relieves the adversarial relationship between closely related classes.
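One way to fold a label distance into a contrastive objective is to weight each negative pair by the distance between its labels, so that semantically close classes are pushed apart less aggressively. The loss below is a sketch of that general idea under invented embeddings and distance matrices; it is not the paper's exact objective:

```python
import numpy as np

def clld_style_loss(emb, labels, label_dist, temp=0.1):
    """Illustrative supervised contrastive loss where each negative pair
    is weighted by the distance between its labels (a sketch of the
    label-distance idea, not CLLD's published formulation)."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = np.exp(emb @ emb.T / temp)       # exponentiated cosine similarity
    n = len(labels)
    total, terms = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        # Negatives with more distant labels contribute more repulsion.
        w = np.array([1.0 if labels[j] == labels[i]
                      else label_dist[labels[i]][labels[j]] for j in range(n)])
        w[i] = 0.0                         # exclude self-similarity
        denom = float(np.sum(w * sim[i]))
        for j in pos:
            total += -np.log(sim[i, j] / denom)
            terms += 1
    return total / terms

emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0],
                [0.1, 0.9], [-1.0, 0.0], [-0.9, -0.1]])
labels = [0, 0, 1, 1, 2, 2]
near = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # all classes equally close
far = [[0, 3, 3], [3, 0, 3], [3, 3, 0]]    # all classes far apart
loss_near = clld_style_loss(emb, labels, near)
loss_far = clld_style_loss(emb, labels, far)
```

Scaling up the label distances increases the repulsion term for every negative, so the same embeddings incur a larger loss, i.e., the objective demands more separation between distant classes than between close ones.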

Neural Analysis and Synthesis: Reconstructing Speech from Self-Supervised Representations Artificial Intelligence

We present a neural analysis and synthesis (NANSY) framework that can manipulate the voice, pitch, and speed of an arbitrary speech signal. Most previous works have focused on using an information bottleneck to disentangle analysis features for controllable synthesis, which usually results in poor reconstruction quality. We address this issue by proposing a novel training strategy based on information perturbation. The idea is to perturb information in the original input signal (e.g., formant, pitch, and frequency response), thereby letting the synthesis networks selectively take the essential attributes needed to reconstruct the input signal. Because NANSY does not need any bottleneck structures, it enjoys both high reconstruction quality and controllability. Furthermore, NANSY does not require any labels associated with the speech data, such as text or speaker information; instead it uses a new set of analysis features, i.e., a wav2vec feature and a newly proposed pitch feature, Yingram, which allows for fully self-supervised training. Taking advantage of fully self-supervised training, NANSY can easily be extended to a multilingual setting simply by training it on a multilingual dataset. The experiments show that NANSY achieves significant performance improvements in several applications such as zero-shot voice conversion, pitch shift, and time-scale modification.
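The information-perturbation idea can be illustrated schematically: randomly warp the signal in time (a pitch/speed perturbation) and randomly tilt its spectrum (a frequency-response perturbation) before the network sees it. Real systems use dedicated DSP (e.g., proper formant shifting); the crude transforms and parameter ranges below are assumptions chosen only to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

def perturb(wav):
    """Schematic information perturbation: random resampling plus a
    random spectral tilt. The point is that the network downstream can
    no longer rely on the perturbed attributes to reconstruct the input."""
    # Random resampling factor ~ crude pitch/speed perturbation.
    rate = rng.uniform(0.8, 1.25)
    t_old = np.arange(len(wav))
    t_new = np.arange(0, len(wav), rate)
    warped = np.interp(t_new, t_old, wav)
    # Random linear gain ramp across frequency ~ response perturbation.
    spec = np.fft.rfft(warped)
    tilt = np.linspace(1.0, rng.uniform(0.5, 2.0), len(spec))
    return np.fft.irfft(spec * tilt, n=len(warped))

wav = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s, 220 Hz tone
out = perturb(wav)
```

Each call draws fresh perturbation parameters, so across training steps the only attributes that stay reliable are the ones the perturbations leave untouched.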

Generalized Anomaly Detection Artificial Intelligence

We study anomaly detection for the case when the normal class consists of more than one object category. This is an obvious generalization of the standard one-class anomaly detection problem. However, we show that jointly using multiple one-class anomaly detectors to solve this problem yields poorer results than training a single one-class anomaly detector on all normal object categories together. We further develop a new anomaly detector called DeepMAD that learns compact distinguishing features by exploiting the multiple normal object categories. This algorithm achieves higher AUC values across datasets than two top-performing one-class algorithms that are either trained on each normal object category separately or jointly trained on all normal object categories combined. In addition to theoretical results, we present empirical results using CIFAR-10, fMNIST, CIFAR-100, and a new dataset we developed called RECYCLE.
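Scoring against a multi-category normal class can be sketched with a minimal baseline: model each normal category by a centroid in feature space and score a sample by its distance to the nearest centroid. DeepMAD learns the feature space itself; the 2-D features and the centroid model below are assumptions made only to illustrate the scoring step:

```python
import numpy as np

def fit_centroids(features, labels):
    """One centroid per normal category in a given feature space."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def anomaly_score(x, centroids):
    # Distance to the nearest normal-category centroid.
    return min(float(np.linalg.norm(x - mu)) for mu in centroids.values())

rng = np.random.default_rng(3)
# Two synthetic "normal" categories as tight clusters in a 2-D space.
normal_a = rng.normal([0, 0], 0.1, size=(50, 2))
normal_b = rng.normal([5, 5], 0.1, size=(50, 2))
feats = np.vstack([normal_a, normal_b])
labels = np.array([0] * 50 + [1] * 50)
cents = fit_centroids(feats, labels)

s_near = anomaly_score(np.array([0.0, 0.1]), cents)  # near a normal mode
s_far = anomaly_score(np.array([2.5, 2.5]), cents)   # between the modes
print(s_near < s_far)
```

The point mirrors the paper's observation: the normal class is multimodal, so a detector must respect the individual categories rather than collapse them into one blob.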

End-to-End Speech Emotion Recognition: Challenges of Real-Life Emergency Call Centers Data Recordings Artificial Intelligence

Recognizing a speaker's emotion from their speech can be a key element in emergency call centers. End-to-end deep learning systems for speech emotion recognition now achieve results equivalent to, or even better than, conventional machine learning approaches. In this paper, to validate the performance of our neural network architecture for emotion recognition from speech, we first trained and tested it on the widely used corpus accessible to the community, IEMOCAP. We then applied the same architecture to the real-life corpus CEMO, composed of 440 dialogs (2h16m) from 485 speakers. The most frequent emotions expressed by callers in these real-life emergency dialogues are fear, anger, and positive emotions such as relief. In the IEMOCAP general-topic conversations, the most frequent emotions are sadness, anger, and happiness. Using the same end-to-end deep learning architecture, an Unweighted Average Recall (UA) of 63% is obtained on IEMOCAP and a UA of 45.6% on CEMO, each with 4 classes. Using only 2 classes (Anger, Neutral), the results for CEMO are 76.9% UA compared to 81.1% UA for IEMOCAP. We expect that these encouraging results on CEMO can be improved by combining the audio channel with the linguistic channel. Real-life emotions are clearly more complex than acted ones, mainly due to the large diversity of speakers' emotional expressions. Index Terms: emotion detection, end-to-end deep learning architecture, call center, real-life database, complex emotions.

Diagnosis of COVID-19 Using Machine Learning and Deep Learning: A review Artificial Intelligence

Background: This paper provides a systematic review of the application of Artificial Intelligence (AI), in the form of Machine Learning (ML) and Deep Learning (DL) techniques, to fighting the effects of the novel coronavirus disease (COVID-19). Objective & Methods: The objective is to perform a scoping review of AI for COVID-19 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A literature search was performed for relevant studies published from 1 January 2020 to 27 March 2021. Out of 4050 research papers available from reputed publishers, a full-text review of 440 articles was conducted based on the keywords AI, COVID-19, ML, forecasting, DL, X-ray, and Computed Tomography (CT). Finally, 52 articles were included in the result synthesis of this paper. As part of the review, different ML regression methods for predicting the number of confirmed and death cases were reviewed first. Secondly, a comprehensive survey was carried out on the use of ML in classifying COVID-19 patients. Thirdly, different medical-imaging datasets were compared in terms of the number of images, the number of positive samples, and the number of classes. The different stages of diagnosis, including preprocessing, segmentation, and feature extraction, were also reviewed. Fourthly, the performance results of different research papers were compared to evaluate the effectiveness of DL methods on different datasets. Results: Results show that the residual neural network (ResNet-18) and the densely connected convolutional network (DenseNet-169) exhibit excellent classification accuracy for X-ray images, while DenseNet-201 has the maximum accuracy in classifying CT scan images. This indicates that ML and DL are useful tools for assisting researchers and medical professionals in predicting, screening, and detecting COVID-19.

D2RLIR : an improved and diversified ranking function in interactive recommendation systems based on deep reinforcement learning Artificial Intelligence

Recently, interactive recommendation systems based on reinforcement learning have attracted researchers' attention because they treat the recommendation procedure as a dynamic process and update the recommendation model based on immediate user feedback, which is neglected in traditional methods. Existing works have two significant drawbacks: first, an inefficient ranking function for producing the Top-N recommendation list; second, a focus on recommendation accuracy with little attention to other evaluation metrics such as diversity. This paper proposes a deep reinforcement learning based recommendation system that uses an Actor-Critic architecture to model users' dynamic interaction with the recommender agent and maximize the expected long-term reward. Furthermore, we propose using Spotify's Annoy algorithm to find the items most similar to the action generated by the actor network. The Total Diversity Effect Ranking algorithm is then used to generate recommendations with respect to both relevance and diversity. Moreover, we apply positional encoding to compute representations of the user's interaction sequence without using sequence-aligned recurrent neural networks. Extensive experiments on the MovieLens dataset demonstrate that our proposed model generates a diverse yet relevant recommendation list based on the user's preferences.
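The retrieve-then-rerank step can be sketched as follows. Exact cosine top-k stands in for the Annoy index (Annoy approximates this lookup for large catalogs), and a greedy MMR-style trade-off stands in for the Total Diversity Effect Ranking algorithm; the embeddings, sizes, and the lambda weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
items = rng.normal(size=(100, 16))            # toy item embedding matrix
items /= np.linalg.norm(items, axis=1, keepdims=True)

def nearest_items(action, k=20):
    """Exact cosine top-k; an Annoy index would approximate this step."""
    a = action / np.linalg.norm(action)
    sims = items @ a
    return np.argsort(-sims)[:k], sims

def diverse_rerank(cands, sims, n=5, lam=0.5):
    """Greedy relevance/diversity trade-off (an MMR-style stand-in for
    the paper's Total Diversity Effect Ranking)."""
    chosen = [cands[0]]
    pool = list(cands[1:])
    while len(chosen) < n and pool:
        def score(i):
            # Penalize redundancy with items already chosen.
            red = max(float(items[i] @ items[j]) for j in chosen)
            return lam * sims[i] - (1 - lam) * red
        best = max(pool, key=score)
        pool.remove(best)
        chosen.append(best)
    return chosen

action = rng.normal(size=16)                  # stand-in actor-network output
cands, sims = nearest_items(action)
rec = diverse_rerank(cands, sims)
```

Raising `lam` pushes the list toward pure relevance; lowering it spreads the recommendations across dissimilar items, which is the accuracy/diversity trade-off the paper targets.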

Cooperative Deep $Q$-learning Framework for Environments Providing Image Feedback Artificial Intelligence

In this paper, we address two key challenges in the deep reinforcement learning setting, sample inefficiency and slow learning, with a dual-NN-driven learning approach. In the proposed approach, we use two deep NNs with independent initializations to robustly approximate the action-value function in the presence of image inputs. In particular, we develop a temporal difference (TD) error-driven learning approach, in which we introduce a set of linear transformations of the TD error to directly update the parameters of each layer in the deep NN. We demonstrate theoretically that the cost minimized by the error-driven learning (EDL) regime is an approximation of the empirical cost, and that the approximation error shrinks as learning progresses, irrespective of the size of the network. Using simulation analysis, we show that the proposed method enables faster learning and convergence and requires a smaller buffer size (thereby increasing sample efficiency).
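How two independently initialized networks can cooperate on the TD error can be sketched with a double-estimator target: one network picks the greedy next action, the other evaluates it. This decoupling is one plausible reading of the dual-NN setup, not the paper's exact algorithm, and all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def q_net(params, s):
    """Minimal two-layer Q-function: state in, one value per action out."""
    W1, W2 = params
    return np.tanh(s @ W1) @ W2

def init(seed, d_s=6, d_h=12, n_a=3):
    r = np.random.default_rng(seed)
    return [r.normal(0, 0.3, (d_s, d_h)), r.normal(0, 0.3, (d_h, n_a))]

qa, qb = init(0), init(1)       # two independently initialized deep NNs

def td_error(params_online, params_other, s, a, r_t, s_next, gamma=0.99):
    """TD error where the greedy action comes from one network and its
    value from the other, reducing overestimation bias."""
    a_star = int(np.argmax(q_net(params_online, s_next)))
    target = r_t + gamma * q_net(params_other, s_next)[a_star]
    return float(target - q_net(params_online, s)[a])

s = rng.normal(size=6)
s2 = rng.normal(size=6)
delta = td_error(qa, qb, s, a=0, r_t=1.0, s_next=s2)
```

This scalar TD error is what the EDL regime would then distribute to every layer through its fixed linear transformations, in place of layer-by-layer backpropagation.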