
Supervised Learning


Seth Meyers blasts Trump as a 'one-man super spreader' of coronavirus

Mashable

"At what point can we say Trump is actively putting people in harm's way?" asked Late Night host Seth Meyers on Thursday. "He holds indoor rallies, refuses to wear a mask, and wants to cut back on testing." While numerous other countries have successfully suppressed their coronavirus outbreaks, the pandemic has reached a new peak in the U.S. The country recorded 39,327 new cases on Thursday, beating the previous single-day record set on Wednesday. Meanwhile, Trump's administration intends to stop funding coronavirus testing sites at the end of June. "That's like a pilot turning off the seatbelt sign after they graze a mountain," said Meyers. "'Don't worry folks, we just nicked one of the Rockies.'"


Florida Coronavirus Cases Set Record; Positive Tests Also Up

U.S. News

Gov. Ron DeSantis said last week that the upward trend in confirmed cases mostly reflects increased testing, combined with spikes in some agricultural communities. However, the number of tests conducted daily peaked three weeks ago, and the percentage of positive tests is now over 6%, more than double the 2.3% rate of late May.


Feature extraction and similar image search with OpenCV for newbies

#artificialintelligence

Image features For this task, we first need to understand what an image feature is and how we can use it. An image feature is a simple image pattern on the basis of which we can describe what we see in the image. For example, a cat's eye would be a feature in an image of a cat. The main role of features in computer vision (and beyond) is to transform visual information into a vector space. But how do we get these features from an image?


The Latest: 52 Positive Cases Tied to Wisconsin Election

U.S. News

The state Department of Health Services reported the latest figures on Tuesday, three weeks after the April 7 presidential primary and spring election, which drew widespread concern because voters waited in long lines to cast ballots in Milwaukee. Democratic Gov. Tony Evers tried to move to an all-mail election but was blocked by the Republican Legislature and the conservative-controlled Wisconsin Supreme Court.


A negative case analysis of visual grounding methods for VQA

arXiv.org Artificial Intelligence

Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons. To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains. However, we show that the performance improvements are not a result of improved visual grounding, but a regularization effect which prevents over-fitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2.


Towards Exploiting Implicit Human Feedback for Improving RDF2vec Embeddings

arXiv.org Artificial Intelligence

RDF2vec is a technique for creating vector space embeddings from an RDF knowledge graph, i.e., representing each entity in the graph as a vector. It first creates sequences of nodes by performing random walks on the graph. In a second step, those sequences are processed by the word2vec algorithm for creating the actual embeddings. In this paper, we explore the use of external edge weights for guiding the random walks. As edge weights, transition probabilities between pages in Wikipedia are used as a proxy for the human feedback for the importance of an edge. We show that in some scenarios, RDF2vec utilizing those transition probabilities can outperform both RDF2vec based on random walks as well as the usage of graph internal edge weights.
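The weighted-walk idea described above can be sketched as follows; the toy graph and its edge weights are invented for illustration, with the weights standing in for the Wikipedia transition probabilities the paper uses as a proxy for human feedback:

```python
import random

# Toy RDF-style graph: node -> list of (neighbor, weight) pairs.
graph = {
    "Berlin":  [("Germany", 0.9), ("City", 0.1)],
    "Germany": [("Europe", 0.7), ("Berlin", 0.3)],
    "City":    [("Berlin", 1.0)],
    "Europe":  [("Germany", 1.0)],
}

def weighted_walk(graph, start, length, rng):
    """One biased random walk: pick each next node with probability
    proportional to the external edge weight."""
    walk = [start]
    for _ in range(length - 1):
        nodes, weights = zip(*graph[walk[-1]])
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk

rng = random.Random(42)
walks = [weighted_walk(graph, n, 5, rng) for n in graph for _ in range(10)]
# In RDF2vec's second step, these node sequences are treated as
# "sentences" and fed to word2vec to produce the entity embeddings.
print(walks[0])
```

With uniform weights this reduces to plain RDF2vec random walks; the external weights simply bias the walks toward edges humans follow more often.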


Locality Preserving Loss to Align Vector Spaces

arXiv.org Machine Learning

We present a locality preserving loss (LPL) that improves the alignment between vector space representations (i.e., word or sentence embeddings) while separating (increasing the distance between) uncorrelated representations, as compared to the standard method that minimizes only the mean squared error (MSE). The locality preserving loss optimizes the projection by maintaining, in the target domain, the local neighborhood of embeddings found in the source domain. This reduces the overall size of the dataset required to train the model. We argue that vector space alignment (with MSE and LPL losses) acts as a regularizer in certain language-based classification tasks, leading to better accuracy than the baseline, especially when the training set is small. We validate the effectiveness of LPL on a cross-lingual word alignment task, a natural language inference task, and a multi-lingual inference task.
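One plausible form of such a loss (a guess at the general idea from the abstract, not the paper's exact formulation) combines the MSE alignment term with a penalty that keeps each projected point near the projections of its source-space neighbors; all data and hyperparameters below are invented for the sketch:

```python
import numpy as np

def lpl_loss(src, tgt, proj, k=2, lam=0.5):
    """MSE alignment plus a locality term: each projected source point
    should also stay close to the projections of its k nearest
    source-space neighbors, preserving the source neighborhood
    structure in the target domain."""
    mapped = src @ proj
    mse = np.mean((mapped - tgt) ** 2)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self as a neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]  # k nearest neighbors per point
    local = np.mean((mapped[:, None] - mapped[nbrs]) ** 2)
    return mse + lam * local

rng = np.random.default_rng(0)
src = rng.normal(size=(20, 8))    # source-space embeddings
tgt = rng.normal(size=(20, 8))    # target-space embeddings
proj = rng.normal(size=(8, 8))    # linear projection to be learned
loss = lpl_loss(src, tgt, proj)
```

Minimizing only the MSE term recovers the standard alignment baseline; the locality term is what acts as the extra regularizer the abstract describes.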


Embedding Java Classes with code2vec: Improvements from Variable Obfuscation

arXiv.org Machine Learning

Automatic source code analysis in key areas of software engineering, such as code security, can benefit from Machine Learning (ML). However, many standard ML approaches require a numeric representation of data and cannot be applied directly to source code. Thus, to enable ML, we need to embed source code into numeric feature vectors while maintaining the semantics of the code as much as possible. code2vec is a recently released embedding approach that uses the proxy task of method name prediction to map Java methods to feature vectors. However, experimentation with code2vec shows that it learns to rely on variable names for prediction, causing it to be easily fooled by typos or adversarial attacks. Moreover, it is only able to embed individual Java methods and cannot embed an entire collection of methods such as those present in a typical Java class, making it difficult to perform predictions at the class level (e.g., for the identification of malicious Java classes). Both shortcomings are addressed in the research presented in this paper. We investigate the effect of obfuscating variable names during the training of a code2vec model to force it to rely on the structure of the code rather than specific names and consider a simple approach to creating class-level embeddings by aggregating sets of method embeddings. Our results, obtained on a challenging new collection of source-code classification problems, indicate that obfuscating variable names produces an embedding model that is both impervious to variable naming and more accurately reflects code semantics. The datasets, models, and code are shared for further ML research on source code.
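The class-level aggregation mentioned above is simple enough to sketch; the embeddings below are random stand-ins for real code2vec method vectors, and mean-pooling is one obvious aggregation choice rather than necessarily the paper's exact one:

```python
import numpy as np

# Hypothetical code2vec-style method embeddings for one Java class:
# four methods, each mapped to a 384-dimensional vector (code2vec's
# default code-vector size; the values here are random stand-ins).
rng = np.random.default_rng(0)
method_vecs = rng.normal(size=(4, 384))

# Aggregate the set of method embeddings into a single class-level
# embedding by mean-pooling; the result can feed a class-level
# classifier (e.g., for flagging malicious Java classes).
class_vec = method_vecs.mean(axis=0)
```

Because the pooled vector lives in the same space as the method vectors, any downstream model trained on method embeddings can be reused at the class level.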


Nonparametric Contextual Bandits in Metric Spaces with Unknown Metric

Neural Information Processing Systems

Suppose that there is a large set of arms, yet there is a simple but unknown structure amongst the arm reward functions. We present a novel algorithm which learns data-driven similarities amongst the arms in order to implement adaptive partitioning of the context-arm space for more efficient learning. We provide regret bounds along with simulations that highlight the algorithm's dependence on the local geometry of the reward functions.


Search-Guided, Lightly-Supervised Training of Structured Prediction Energy Networks

Neural Information Processing Systems

In structured output prediction tasks, labeling ground-truth training output is often expensive. However, for many tasks, even when the true output is unknown, we can evaluate predictions using a scalar reward function, which may be easily assembled from human knowledge or non-differentiable pipelines. But searching through the entire output space to find the best output with respect to this reward function is typically intractable. In this paper, we instead use efficient truncated randomized search in this reward function to train structured prediction energy networks (SPENs), which provide efficient test-time inference using gradient-based search on a smooth, learned representation of the score landscape, and have previously yielded state-of-the-art results in structured prediction. In particular, this truncated randomized search in the reward function yields previously unknown local improvements, providing effective supervision to SPENs, avoiding their traditional need for labeled training data.
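A toy version of the truncated randomized search can illustrate the supervision signal; the bit-string output space, the single-flip proposal, and the reward below are all invented for the sketch and much simpler than the paper's setting:

```python
import random

def truncated_random_search(y, reward, n_steps=50, rng=None):
    """Randomly flip one position of a structured output and keep the
    change when the (possibly non-differentiable) reward improves.
    The improved outputs found this way can supervise an energy
    network in place of ground-truth labels."""
    rng = rng or random.Random(0)
    best = list(y)
    for _ in range(n_steps):       # "truncated": a fixed step budget
        i = rng.randrange(len(best))
        cand = list(best)
        cand[i] = 1 - cand[i]
        if reward(cand) > reward(best):
            best = cand
    return best

# Toy reward: negative Hamming distance to a hidden target labeling.
target = [1, 0, 1, 1, 0]
reward = lambda y: -sum(a != b for a, b in zip(y, target))
result = truncated_random_search([0, 0, 0, 0, 0], reward)
```

The point is that the search never needs the true output directly, only reward evaluations, which is what makes the supervision "light".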