Supervised Learning


How to Use Arabic Word2Vec Word Embedding with LSTM

#artificialintelligence

Word embedding is the approach of learning words and their relative meanings from a corpus of text and representing each word as a dense vector. The word vector is the projection of the word into a continuous feature vector space; see Figure 1 (A) for clarity. Words that have similar meanings should be close together in the vector space, as illustrated in Figure 1 (B). Word2vec is one of the most popular word embeddings in NLP. Word2vec has two variants, the Continuous Bag-of-Words Model (CBOW) and the Continuous Skip-gram Model [3]; the model architectures are shown in Figure 2. CBOW predicts the word according to the given context, whereas Skip-gram predicts the context according to the given word, which increases the computational complexity [3].
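As a concrete illustration of the two architectures, the sketch below trains a skip-gram model with gensim (assuming gensim 4.x); the three tokenized Arabic sentences are toy stand-ins for a real corpus, and setting sg=0 would train CBOW instead.

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus; a real model needs a large tokenized Arabic corpus.
sentences = [["الملك", "رجل", "قوي"],
             ["الملكة", "امرأة", "قوية"],
             ["الكتاب", "على", "الطاولة"]]

# sg=1 selects the skip-gram architecture; sg=0 would train CBOW instead.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

vector = model.wv["الملك"]                      # 100-dimensional dense vector for a word
print(model.wv.most_similar("الملك", topn=3))   # nearest neighbours in the vector space
```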


A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space

arXiv.org Artificial Intelligence

In cross-lingual language models, representations for many different languages live in the same space. Here, we investigate the linguistic and non-linguistic factors affecting sentence-level alignment in cross-lingual pretrained language models for 101 languages and 5,050 language pairs. Using BERT-based LaBSE and BiLSTM-based LASER as our models, and the Bible as our corpus, we compute a task-based measure of cross-lingual alignment in the form of bitext retrieval performance, as well as four intrinsic measures of vector space alignment and isomorphism. We then examine a range of linguistic, quasi-linguistic, and training-related features as potential predictors of these alignment metrics. The results of our analyses show that word order agreement and agreement in morphological complexity are two of the strongest linguistic predictors of cross-linguality. We also note in-family training data as a stronger predictor than language-specific training data across the board. We verify some of our linguistic findings by looking at the effect of morphological segmentation on English-Inuktitut alignment, in addition to examining the effect of word order agreement on isomorphism for 66 zero-shot language pairs from a different corpus. We make the data and code for our experiments publicly available.
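To make the bitext-retrieval measure concrete, here is a minimal sketch of cosine nearest-neighbour retrieval over precomputed sentence embeddings; the random arrays stand in for LaBSE or LASER embeddings of an aligned corpus, and the paper's actual retrieval criterion may be more involved than plain cosine similarity.

```python
import numpy as np

def retrieval_accuracy(src_emb, tgt_emb):
    """Bitext retrieval: for each source sentence, find the nearest target
    sentence by cosine similarity and check that it is the true translation."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    nearest = (src @ tgt.T).argmax(axis=1)            # index of most similar target
    return (nearest == np.arange(len(src))).mean()    # aligned corpora share indices

# Toy stand-ins for LaBSE/LASER sentence embeddings of two aligned corpora.
src_emb, tgt_emb = np.random.randn(100, 768), np.random.randn(100, 768)
print(retrieval_accuracy(src_emb, tgt_emb))
```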


College admissions scam case set for Sept. 8 trial in Boston

Boston Herald

USC's Pat Haden and now two "Varsity Blues" defendants want to file briefs in the college admissions scam case under seal. What they want to share, they argue, is "sensitive, confidential, and personally identifiable information." Haden, the former athletic director at the University of Southern California, has filed a motion in federal court in Boston to "quash a trial subpoena for testimony issued by counsel for defendants," as the Herald has reported. He was just granted permission to state his case in private. Defendants Gamal Abdelaziz and John Wilson are seeking that same protection to keep their arguments out of the public eye -- for now.


MatSat: a matrix-based differentiable SAT solver

arXiv.org Artificial Intelligence

We propose a new approach to SAT solving which solves SAT problems in vector spaces as a cost minimization problem of a non-negative differentiable cost function J^sat. In our approach, a solution, i.e., satisfying assignment, for a SAT problem in n variables is represented by a binary vector u in {0,1}^n that makes J^sat(u) zero. We search for such u in a vector space R^n by cost minimization, i.e., starting from an initial u_0 and minimizing J^sat to zero while iteratively updating u by Newton's method. We implemented our approach as a matrix-based differentiable SAT solver, MatSat. Although existing mainstream SAT solvers decide each bit of a solution assignment one by one, be they of conflict-driven clause learning (CDCL) type or of stochastic local search (SLS) type, MatSat fundamentally differs from them in that it continuously approaches a solution in a vector space. We conducted an experiment to measure the scalability of MatSat with random 3-SAT problems, in which MatSat could find a solution for up to n = 10^5 variables. We also compared MatSat with four state-of-the-art SAT solvers, including winners of the SAT competition 2018 and SAT Race 2019, in terms of time for finding a solution, using a random benchmark set from the SAT 2018 competition and an artificial random 3-SAT instance set. The result shows that MatSat comes in second in both test sets and outperforms all the CDCL-type solvers.
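The exact form of J^sat is defined in the paper; the sketch below uses a hedged stand-in cost (each clause contributes the product of its unsatisfied-literal terms, so the cost is zero exactly when every clause is satisfied) and plain gradient descent rather than Newton's method, just to show SAT-as-cost-minimization in a vector space.

```python
import numpy as np

def clause_cost(u, clause):
    # Product over literals of (1 - literal truth value); zero iff the clause is satisfied.
    vals = [u[abs(l) - 1] if l > 0 else 1.0 - u[abs(l) - 1] for l in clause]
    return np.prod([1.0 - v for v in vals])

def cost(u, clauses):
    return sum(clause_cost(u, c) for c in clauses)

def grad(u, clauses, eps=1e-4):
    # Numerical gradient keeps the sketch short; MatSat itself uses analytic updates.
    g = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u)
        e[i] = eps
        g[i] = (cost(u + e, clauses) - cost(u - e, clauses)) / (2 * eps)
    return g

def solve(clauses, n, steps=2000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.2, 0.8, n)                 # relaxed assignment in [0,1]^n
    for _ in range(steps):
        u = np.clip(u - lr * grad(u, clauses), 0.0, 1.0)
        b = (u > 0.5).astype(float)
        if cost(b, clauses) == 0.0:              # rounded vector satisfies every clause
            return b
    return None

# (x1 or ~x2) and (x2 or x3) and (~x1 or ~x3), in DIMACS-style signed literals.
print(solve([[1, -2], [2, 3], [-1, -3]], n=3))
```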


'Jane' Starring Madelaine Petsch Delays Filming Due To COVID-19 Cases On Set

International Business Times

Startup studio and streaming service Creator Plus delayed its filming schedule for "Jane" after two COVID-19 cases were confirmed on set in New Mexico. In a statement obtained by Variety, Creator Plus said the cases were detected "while adhering to strict safety daily testing protocols." "As a result, we immediately implemented a six-day shutdown, which started yesterday (as a half day) from the initial case we received. All lead actors are continuing to test negative despite exposure. We're working closely with our SAG representatives, the CDC and the All Together New Mexico 'COVID Safe Practices for Individuals and Employers' while upholding SAG's Return to Work agreement," the company said in a statement Wednesday.


CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding

arXiv.org Artificial Intelligence

Although pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations. Recent works aiming to improve the robustness of pre-trained models mainly focus on adversarial training with perturbed examples that have similar semantics, neglecting the utilization of different or even opposite semantics. Unlike in the image processing field, text is discrete, and a few word substitutions can cause significant semantic changes. To study the impact on semantics caused by small perturbations, we conduct a series of pilot experiments and surprisingly find that adversarial training is useless or even harmful for the model to detect these semantic changes. To address this problem, we propose Contrastive Learning with semantIc Negative Examples (CLINE), which constructs semantic negative examples in an unsupervised manner to improve robustness under semantic adversarial attack. By comparing with similar and opposite semantic examples, the model can effectively perceive the semantic changes caused by small perturbations. Empirical results show that our approach yields substantial improvements on a range of sentiment analysis, reasoning, and reading comprehension tasks. CLINE also ensures compactness within the same semantics and separability across different semantics at the sentence level.
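CLINE's full objective also includes other training terms; the sketch below shows only the contrastive idea in spirit, as a minimal PyTorch loss that pulls an anchor sentence embedding toward a semantically similar example and pushes it away from a constructed semantic negative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negative, temperature=0.1):
    """Triplet-style contrastive loss: the anchor should be closer to the
    semantically similar (positive) embedding than to the semantic negative."""
    anchor, positive, negative = (F.normalize(x, dim=-1) for x in (anchor, positive, negative))
    pos_sim = (anchor * positive).sum(-1) / temperature   # cosine similarity
    neg_sim = (anchor * negative).sum(-1) / temperature
    logits = torch.stack([pos_sim, neg_sim], dim=-1)
    # Cross-entropy with the positive pair as the target class (index 0).
    return F.cross_entropy(logits, torch.zeros(anchor.size(0), dtype=torch.long))

# Toy batch of 4 sentence embeddings of dimension 768.
a, p, n = (torch.randn(4, 768) for _ in range(3))
print(contrastive_loss(a, p, n))
```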


See, Hear, Explore: curiosity via audio-visual association

AIHub

To compute audio features, we take an audio clip spanning 4 time steps (1/15th of a second for these 60-frames-per-second environments) and apply a Fast Fourier Transform (FFT). The FFT output is downsampled using max pooling to a 512-dimensional feature vector, which is used as input to the discriminator along with a 512-dimensional visual feature vector.
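A minimal sketch of that audio pipeline, assuming a mono clip at a 44.1 kHz sample rate (the authors' exact FFT and pooling configuration is not spelled out here): take the FFT magnitude spectrum and max-pool it down to 512 dimensions.

```python
import numpy as np

def audio_features(clip, out_dim=512):
    """Compute an FFT magnitude spectrum for a short audio clip and
    max-pool it down to a fixed-length feature vector."""
    spectrum = np.abs(np.fft.rfft(clip))           # magnitude spectrum
    pooled_len = -(-len(spectrum) // out_dim)      # ceil division
    padded = np.pad(spectrum, (0, pooled_len * out_dim - len(spectrum)))
    return padded.reshape(out_dim, pooled_len).max(axis=1)   # max pooling

# 4 frames of audio at 60 fps and a 44.1 kHz sample rate ~= 2940 samples.
clip = np.random.randn(2940)
print(audio_features(clip).shape)   # (512,)
```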


Probing Pre-Trained Language Models for Disease Knowledge

arXiv.org Artificial Intelligence

Pre-trained language models such as ClinicalBERT have achieved impressive results on tasks such as medical Natural Language Inference. At first glance, this may suggest that these models are able to perform medical reasoning tasks, such as mapping symptoms to diseases. However, we find that standard benchmarks such as MedNLI contain relatively few examples that require such forms of reasoning. To better understand the medical reasoning capabilities of existing language models, in this paper we introduce DisKnE, a new benchmark for Disease Knowledge Evaluation. To construct this benchmark, we annotated each positive MedNLI example with the types of medical reasoning that are needed. We then created negative examples by corrupting these positive examples in an adversarial way. Furthermore, we define training-test splits per disease, ensuring that no knowledge about test diseases can be learned from the training data, and we canonicalize the formulation of the hypotheses to avoid the presence of artefacts. This leads to a number of binary classification problems, one for each type of reasoning and each disease. When analysing pre-trained models for the clinical/biomedical domain on the proposed benchmark, we find that their performance drops considerably.
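A rough sketch of the per-disease splitting idea, using hypothetical field names rather than the actual DisKnE release format: hold out entire diseases for testing so that nothing about a test disease can leak from the training data.

```python
from collections import defaultdict

def disease_splits(examples, test_diseases):
    """Group examples by disease and hold out whole diseases for testing,
    so no knowledge about test diseases can be learned from training data."""
    by_disease = defaultdict(list)
    for ex in examples:
        by_disease[ex["disease"]].append(ex)
    train = [ex for d, exs in by_disease.items() if d not in test_diseases for ex in exs]
    test = {d: by_disease[d] for d in test_diseases}
    return train, test

# Hypothetical examples; fields are illustrative only.
examples = [
    {"disease": "asthma", "text": "...", "label": 1},
    {"disease": "diabetes", "text": "...", "label": 0},
]
train, test = disease_splits(examples, test_diseases={"diabetes"})
```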


Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent

arXiv.org Artificial Intelligence

We focus on the problem of domain adaptation when the goal is shifting the model towards the target distribution, rather than learning domain-invariant representations. It has been shown that under the following two assumptions: (a) access to samples from intermediate distributions, and (b) samples being annotated with the amount of change from the source distribution, self-training can be successfully applied on gradually shifted samples to adapt the model toward the target distribution. We hypothesize that having (a) is enough to enable iterative self-training to slowly adapt the model to the target distribution, by making use of an implicit curriculum. In the case where (a) does not hold, we observe that iterative self-training falls short. We propose GIFT, a method that creates virtual samples from intermediate distributions by interpolating representations of examples from source and target domains. We evaluate an iterative self-training method on datasets with natural distribution shifts, and show that when applied on top of other domain adaptation methods, it improves the performance of the model on the target dataset. We run an analysis on a synthetic dataset to show that in the presence of (a), iterative self-training naturally forms a curriculum of samples. Furthermore, we show that when (a) does not hold, GIFT performs better than iterative self-training.
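A minimal sketch of the interpolation idea behind GIFT, assuming a scikit-learn-style classifier with fit/predict methods and precomputed feature representations; the paper's actual procedure involves additional details beyond this outline.

```python
import numpy as np

def virtual_samples(src_feats, tgt_feats, alpha):
    """Interpolate source and target representations to mimic an intermediate
    distribution; alpha ramps from 0 (source) to 1 (target) over training."""
    idx = np.random.randint(len(tgt_feats), size=len(src_feats))
    return (1.0 - alpha) * src_feats + alpha * tgt_feats[idx]

def gradual_self_train(model, src_feats, src_labels, tgt_feats, steps=5):
    model.fit(src_feats, src_labels)             # start from a source-trained model
    for t in range(1, steps + 1):
        alpha = t / steps
        virtual = virtual_samples(src_feats, tgt_feats, alpha)
        pseudo = model.predict(virtual)          # pseudo-labels from the current model
        model.fit(virtual, pseudo)               # self-train on the virtual domain
    return model
```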


Multi-output Gaussian Processes for Uncertainty-aware Recommender Systems

arXiv.org Machine Learning

Recommender systems are often designed based on a collaborative filtering approach, where user preferences are predicted by modelling interactions between users and items. Many common approaches to the collaborative filtering task are based on learning representations of users and items, including simple matrix factorization, Gaussian process latent variable models, and neural-network based embeddings. While matrix factorization approaches fail to model nonlinear relations, neural networks can potentially capture such complex relations with unprecedented predictive power and are highly scalable. However, neither of them is able to model predictive uncertainties. In contrast, Gaussian Process based models can generate a predictive distribution, but cannot scale to large amounts of data. A database describing such user-item interactions often takes the form of a matrix, where each entry describes the interaction between one user and one item. The overall rating or purchasing pattern of a user can therefore be described by the corresponding row in such a matrix. However, since there are typically large numbers of users and items in the database, and each user is usually only interested in a small subset of items, this user-item matrix is often large and sparse. It is therefore inefficient to define the similarity between users in the high-dimensional feature space defined by all items. Instead, it is more advantageous to derive abstract feature vectors that represent users and items, which inspired a large variety of low-rank matrix decomposition models such as non-negative matrix decomposition [Zhang et al., 2006], biased matrix decomposition [Koren et al., 2009] and non-parametric decomposition [Yu et al., 2009]. These methods aim at learning low-dimensional representations for all users and items, allowing for the prediction of the unobserved interaction between a new pair of user and item.
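For contrast with the GP-based approach, here is a minimal sketch of the plain matrix-factorization baseline the passage describes: it learns user and item vectors whose dot products approximate the observed ratings, and it produces point predictions only, with no predictive uncertainty.

```python
import numpy as np

def factorize(ratings, rank=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Plain matrix factorization via SGD: learn user and item vectors so that
    their dot products approximate the observed ratings (point estimates only)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    observed = np.argwhere(~np.isnan(ratings))   # indices of rated (user, item) pairs
    for _ in range(epochs):
        for i, j in observed:
            err = ratings[i, j] - U[i] @ V[j]
            ui = U[i].copy()
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * ui - reg * V[j])
    return U, V

# Tiny user-item matrix with missing entries encoded as NaN.
R = np.array([[5.0, np.nan, 1.0],
              [4.0, 1.0, np.nan],
              [np.nan, 1.0, 5.0]])
U, V = factorize(R)
print(U @ V.T)   # reconstructed / predicted ratings
```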