

Building a future of friendship between humans and bots

#artificialintelligence

There is no exact algorithm that explains how people make friends. One of the things word vectors do really well is take natural language text and build a vector form of each word. Say you have three words (attack, defend, assail) in your vocabulary; you can form a lot of sentences with those three words. All I need is a similar vector representation of movies that the "friendship algorithm" can understand.
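As a speculative illustration of that idea (not the author's actual method), each user's watch history can be treated as a "sentence" of movie titles so that word2vec learns a vector for every movie; the titles, histories, and gensim parameters below are invented for the sketch, and results on data this small will be noisy.

```python
# Hypothetical item2vec-style sketch: watch histories play the role of
# sentences, movie titles play the role of words.
from gensim.models import Word2Vec

watch_histories = [
    ["Alien", "Aliens", "Predator"],
    ["Alien", "Predator", "The Thing"],
    ["Notting Hill", "Love Actually", "About Time"],
    ["Love Actually", "About Time", "Notting Hill"],
]

model = Word2Vec(watch_histories, vector_size=8, window=2,
                 min_count=1, epochs=200, seed=1)

# Movies that co-occur in similar histories end up with similar vectors,
# which is the kind of representation a "friendship algorithm" could use.
print(model.wv.most_similar("Alien", topn=3))
```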


Deduplicating Massive Datasets with Locality Sensitive Hashing

#artificialintelligence

The similarity of two documents can then simply be defined as the Jaccard similarity of their two sets of shingles: the number of elements (shingles) they have in common as a proportion of the combined size of the two sets, i.e. the size of the intersection divided by the size of the union. This should all be fine, however, since we have already defined the task as finding near-duplicate documents rather than semantically similar ones; for collections of longer documents this method should work very well. The problem is that finding those duplicates took quite a long time: computing the Jaccard similarity requires comparing every document to every other document, and that approach is clearly not scalable. Locality Sensitive Hashing (LSH) is a generic hashing technique that aims, as the name suggests, to preserve the local relations of the data while significantly reducing the dimensionality of the dataset.
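A minimal sketch of the brute-force shingling and Jaccard step described above, assuming character shingles of length k=5 and toy documents; this is exactly the all-pairs comparison that LSH is meant to avoid.

```python
import re
from itertools import combinations

def shingles(text, k=5):
    """Set of k-character shingles of a document (k=5 is an arbitrary choice)."""
    text = re.sub(r"\s+", " ", text.lower()).strip()
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Size of the intersection divided by the size of the union."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumped over the lazy dog",
    "an entirely different sentence about hashing",
]

# The non-scalable part: every document compared to every other document.
for (i, d1), (j, d2) in combinations(enumerate(docs), 2):
    print(i, j, round(jaccard(shingles(d1), shingles(d2)), 3))
```

LSH (typically via MinHash signatures and banding) replaces this all-pairs loop by hashing similar shingle sets into the same buckets, so only documents that share a bucket are compared.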


The Beginner's Guide to Text Vectorization - MonkeyLearn Blog

@machinelearnbot

Using huge amounts of data, it is possible to have a neural network learn good vector representations of words with some desirable properties, like being able to do math with them. In the field of machine learning, transfer learning is the ability of a machine to reuse concepts learned on one task for a different task. The idea behind this algorithm is the following: in the same way that we can get a good word vector representation by using a neural network that tries to predict the surrounding words of a word, they use a neural network to predict the surrounding sentences of a sentence. Facebook's InferSent uses a similar approach, but instead of using machine translation, it trains a neural network to classify the Stanford Natural Language Inference (SNLI) Corpus and, in doing so, also obtains good text vectorizations.
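A minimal sketch of the "doing math with word vectors" property mentioned above, assuming gensim and its downloader are available; the model name and the analogy are illustrative choices, not taken from the article.

```python
import gensim.downloader as api

# Small pre-trained embeddings (downloaded on first use).
wv = api.load("glove-wiki-gigaword-50")

# Vector arithmetic: king - man + woman lands near "queen".
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Transfer learning in miniature: reuse the same vectors for a new task,
# here just measuring how close two words are.
print(wv.similarity("movie", "film"))
```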


Probabilistic Graphical Models 2: Inference - Coursera

@machinelearnbot

These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are also a foundational tool in formulating many machine learning problems. Following the first course, which focused on representation, this course addresses the question of probabilistic inference: how a PGM can be used to answer questions. The (highly recommended) honors track contains two hands-on programming assignments, in which key routines of the most commonly used exact and approximate algorithms are implemented and applied to a real-world problem.
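As a toy illustration of what "answering questions" with a PGM means (not an example from the course), the sketch below computes P(Rain | GrassWet) in a tiny sprinkler network by brute-force enumeration over the joint distribution; the probabilities are invented.

```python
from itertools import product

# Invented CPDs for a three-node network: Rain, Sprinkler -> GrassWet.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {(True, True): 0.99, (True, False): 0.90,   # keyed by (sprinkler, rain)
         (False, True): 0.80, (False, False): 0.00}

def joint(r, s, w):
    """Joint probability of one full assignment (rain, sprinkler, wet)."""
    p_w = P_wet[(s, r)] if w else 1.0 - P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[s] * p_w

# P(Rain=T | Wet=T) = sum_s joint(T, s, T) / sum_{r,s} joint(r, s, T)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print("P(Rain | GrassWet) =", round(num / den, 3))
```

Exact and approximate inference algorithms like those implemented in the honors track exist precisely because this kind of enumeration blows up exponentially with the number of variables.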


In Raw Numpy: t-SNE

@machinelearnbot

To ensure the perplexity of each row of \(P\), \(Perp(P_i)\), is equal to our desired perplexity, we simply perform a binary search over each \(\sigma_i\) until \(Perp(P_i)\) matches our desired perplexity. The search routine takes a matrix of negative euclidean distances and a target perplexity. Let's also define a p_joint function that takes our data matrix \(\textbf{X}\) and returns the matrix of joint probabilities \(P\), estimating the required \(\sigma_i\)'s and the conditional probabilities matrix along the way. With that, we have our joint distributions \(p\) and \(q\). The only real difference is how we define the joint probability distribution matrix \(Q\), which has entries \(q_{ij}\).
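A minimal numpy sketch of that binary search for a single point, assuming the row holds negative squared euclidean distances to the other points; the function names and tolerances are illustrative, not the post's actual code.

```python
import numpy as np

def perplexity(neg_dist_row, sigma):
    """Perplexity of the conditional distribution for one point, given sigma_i."""
    logits = neg_dist_row / (2.0 * sigma ** 2)
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return 2.0 ** entropy

def find_sigma(neg_dist_row, target_perp, tol=1e-4, n_iter=100):
    """Binary search over sigma_i until Perp(P_i) matches the target perplexity."""
    lo, hi = 1e-10, 1e4
    for _ in range(n_iter):
        sigma = (lo + hi) / 2.0
        perp = perplexity(neg_dist_row, sigma)
        if abs(perp - target_perp) < tol:
            break
        if perp > target_perp:    # distribution too flat: shrink sigma
            hi = sigma
        else:
            lo = sigma
    return sigma

# Toy usage: distances from point 0 to four other random 2-D points.
X = np.random.default_rng(0).normal(size=(5, 2))
neg_d = -np.sum((X[0] - X[1:]) ** 2, axis=1)
print(find_sigma(neg_d, target_perp=3.0))
```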


Highlights of EMNLP 2017: Exciting Datasets, Return of the Clusters, and More! - AYLIEN

@machinelearnbot

Four members of our research team spent the past week at the Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) in Copenhagen, Denmark. The current generation of deep learning models is excellent at learning from data. The Subword and Character-level Models in NLP workshop discussed approaches in more detail, with invited talks on subword language models and character-level NMT. Learning better sentence representations is closely related to learning more general word representations.


Four deep learning trends from ACL 2017

#artificialintelligence

Though attention often plays the role of word alignment in NMT, Koehn and Knowles note that it learns to play other, harder-to-understand roles too; thus it is not always as understandable as we might hope. In Parameter Free Hierarchical Graph-Based Clustering for Analyzing Continuous Word Embeddings, Trost and Klakow perform clustering on word embeddings, then cluster those clusters, and so on to obtain a hierarchical tree-like structure. Neural networks are powerful because they can learn arbitrary continuous representations, but humans find discrete information – like language itself – easier to understand. These systems should ideally produce a proof or derivation of the answer – for a semantic parsing question answering system, this might be the semantic parse itself, or a relevant excerpt from the knowledge base.
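As a rough sketch of the clusters-of-clusters idea (using generic agglomerative clustering rather than Trost and Klakow's parameter-free graph-based method, and random stand-in vectors instead of real embeddings):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

words = ["attack", "assail", "defend", "movie", "film", "actor"]
rng = np.random.default_rng(42)
vectors = rng.normal(size=(len(words), 50))   # stand-ins for word embeddings

# Agglomerative clustering merges points into clusters, then clusters of
# clusters, yielding a hierarchical tree over the vocabulary.
tree = linkage(vectors, method="average", metric="cosine")
print(dendrogram(tree, labels=words, no_plot=True)["ivl"])   # leaf order
```

With real embeddings, nearby leaves of the tree correspond to semantically related words, which is one way to turn continuous representations into the discrete structure humans find easier to read.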


Apple's Portrait Lighting uses AI to color our memories

Engadget

People already hate inane Snapchat-like AI photo filters, but a new trick called Portrait Lighting on Apple's iPhone 8 and X might cause even more dismay. The new Portrait Lighting modes work on both the rear camera and the front camera for selfies. They aren't gimmicky filters, though -- quite the opposite: they give the average user a nice way to create sweet portraits and selfies, and other photographers are sanguine about Apple's post-processing. However, with Apple's Portrait Lighting, machine learning is involved, so as Elon Musk, Bill Gates, Stephen Hawking and others have warned us, it's important to consider where it's going, not just where it is now.


The secret language of chatbots

@machinelearnbot

Sensationalist details grew like a snowball, and by the end of July, The Independent's summary of the story was: "Facebook's artificial intelligence robots shut down after they start talking to each other in their own language… Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood." Today, these bots know to delegate tasks to predefined web services; some attempts are being made to build dynamic cloud catalogues of "how-tos" that redirect to the correct web service. As early as 2001, a so-called Battle Management Language was proposed to control "human troops, simulated troops, and future robotic forces" (yes, there is a good reason to post that Terminator image again). Research by Igor Mordatch of OpenAI (mentioned in some articles about the Facebook experiment) focuses on attempts to get bots to develop their own language in a limited universe.


An introduction to representation learning

#artificialintelligence

In representation learning, features are extracted from unlabeled data by training a neural network on a secondary, supervised learning task. Word2vec makes NLP problems like these easier to solve by providing the learning algorithm with pre-trained word embeddings, effectively removing the word meaning subtask from training. Representation learning algorithms give B2B companies like Red Hat the ability to better optimize business strategies with limited historical context by extracting meaningful information from unlabeled data. Sample web activity data used to discover Red Hat customer vectors with doc2vec.
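A hedged sketch of the doc2vec idea in that setting: treat each customer's sequence of web-activity events as a "document" and learn a fixed-length "customer vector" for it. The event tokens, customer IDs, and parameters below are invented for illustration and are not Red Hat's actual pipeline.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Invented web-activity "documents", one per customer.
activity = {
    "customer_a": ["visit:pricing", "download:whitepaper", "visit:docs"],
    "customer_b": ["visit:docs", "visit:docs", "open:support_ticket"],
    "customer_c": ["visit:pricing", "download:whitepaper", "visit:blog"],
}

corpus = [TaggedDocument(words=events, tags=[cid])
          for cid, events in activity.items()]

model = Doc2Vec(corpus, vector_size=16, min_count=1, epochs=50)

# Each customer now has a learned representation that downstream models
# (churn, upsell, segmentation) can consume as features.
print(model.dv["customer_a"][:5])
print(model.dv.most_similar("customer_a"))
```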